Almost every Salesforce org eventually faces the same challenge: there is critical business data living in an external database that users need to access without leaving Salesforce. Maybe it is order history in PostgreSQL, inventory data in a cloud-hosted database, financial records in a data warehouse, or customer usage metrics from your product's backend.
The question is not whether to integrate this data. It is how. And the answer depends on your priorities: Do you need real-time data or can you tolerate latency? How much are you willing to spend? Do you have developers available, or does the solution need to work for admins? How much ongoing maintenance can you absorb?
This article compares the five most common approaches to accessing external databases in Salesforce, with an honest look at the strengths and limitations of each.
Option 1: Traditional ETL (MuleSoft, Informatica, Talend)
ETL (Extract, Transform, Load) is the most established approach to data integration. You extract data from your source database, transform it to fit Salesforce's schema, and load it into custom objects using the Salesforce API.
How it works. An ETL platform connects to your database, runs extraction queries on a schedule (hourly, daily, or triggered), maps source columns to Salesforce fields, and uses the Bulk API or REST API to insert or update records in Salesforce.
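The "transform" step above is where most of the mapping logic lives. As an illustration only, here is a minimal sketch of that step in Python; the source column names, the custom field API names, and the cents-to-currency conversion are all hypothetical assumptions, not part of any specific ETL platform:

```python
# Hypothetical ETL "transform" step: rename source database columns to
# Salesforce field API names and normalize values before a Bulk API load.
# The column names and Order-style fields below are illustrative only.

FIELD_MAP = {
    "order_id": "External_Id__c",
    "customer_email": "Customer_Email__c",
    "total_cents": "Total_Amount__c",
}

def transform_row(row: dict) -> dict:
    """Map one source row to a Salesforce-ready record, converting
    an integer cents column into a currency amount."""
    record = {
        sf_field: row[src_col]
        for src_col, sf_field in FIELD_MAP.items()
        if src_col in row
    }
    if "Total_Amount__c" in record:
        record["Total_Amount__c"] = record["Total_Amount__c"] / 100
    return record

rows = [{"order_id": "A-1001", "customer_email": "pat@example.com", "total_cents": 4599}]
payload = [transform_row(r) for r in rows]
# payload is now shaped for an upsert keyed on an external ID field
```

In a real pipeline, `payload` would be handed to the Bulk API or REST API load step; the point is that schema mapping is explicit code (or configuration) that must be maintained as either side's schema changes.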
Strengths. ETL gives you full control over data transformation. You can clean, deduplicate, aggregate, and reshape data before it reaches Salesforce. The data lives natively in Salesforce, so it works with every Salesforce feature: workflows, Process Builder, validation rules, reports, and SOQL. Mature ETL platforms like MuleSoft (owned by Salesforce), Informatica, and Talend have extensive connector libraries and enterprise support.
Limitations. Data is always stale. Even with aggressive sync schedules, there is a window between when a source record changes and when Salesforce reflects that change. Every synced record counts against your Salesforce data storage allocation, which can become expensive at scale. ETL pipelines require ongoing maintenance: schema changes break mappings, API limits throttle large syncs, and failure recovery needs monitoring. MuleSoft licensing alone can cost tens of thousands of dollars per year, on top of your existing Salesforce spend.
Best for: Large-scale batch integrations where data latency is acceptable and you need to apply complex transformations before data reaches Salesforce.
Option 2: Salesforce Connect (External Objects via OData)
Salesforce Connect takes the opposite approach from ETL. Instead of copying data into Salesforce, it queries the external system on demand using External Objects.
How it works. You configure an External Data Source in Salesforce that points to an OData endpoint. Salesforce creates External Objects that represent your external tables. When a user views a record, runs a report, or executes a SOQL query, Salesforce sends an OData request to the external endpoint, retrieves the data in real time, and displays it as if it were native.
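To make the request flow concrete, here is a rough sketch of the kind of OData 4.0 query URL Salesforce Connect issues when a user filters a list view. The endpoint host and entity set name are assumptions for illustration; the `$filter`, `$top`, and `$count` system query options are standard OData 4.0:

```python
from urllib.parse import urlencode

# Sketch of an OData 4.0 query like those Salesforce Connect sends to an
# external endpoint. The base URL and "orders" entity set are assumptions.
BASE = "https://odata.example.com/v4/orders"

def odata_query(filter_expr: str, top: int = 2000) -> str:
    """Build an OData query URL with a $filter expression, a $top row
    cap, and $count=true so the endpoint reports the total match count."""
    params = {"$filter": filter_expr, "$top": top, "$count": "true"}
    return BASE + "?" + urlencode(params)

url = odata_query("status eq 'OPEN'")
```

The external endpoint evaluates the filter against the live database and returns JSON, which Salesforce renders as External Object records; no data is copied into the org.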
Strengths. Data is always fresh because it is queried live from the source. External Objects do not consume Salesforce data storage. There is no sync pipeline to monitor or maintain. External Objects support list views, detail pages, related lists, reports, and SOQL queries. Setup is configuration-based, not code-based.
Limitations. You need an OData endpoint, and most databases do not natively expose one. Performance depends on the external system's response time and Salesforce's callout timeout limits. External Objects have some feature limitations compared to standard and custom objects: they do not support triggers, Workflow Rules, or Process Builder directly. Row limits per query (typically 2,000) mean External Objects are best for targeted queries, not bulk data browsing.
Best for: Real-time access to external data where freshness matters, storage costs are a concern, and the integration needs to be low-maintenance.
Option 3: Heroku Connect
Heroku Connect is Salesforce's managed data sync service between Heroku Postgres databases and Salesforce orgs.
How it works. Heroku Connect maintains a bidirectional sync between Heroku Postgres tables and Salesforce objects. You map Salesforce objects to Postgres tables, and Heroku Connect handles the ongoing synchronization in near real time.
Strengths. For teams already on the Heroku platform, Heroku Connect offers a well-integrated experience. The sync is managed, requiring minimal DevOps effort. Bidirectional sync means changes in Postgres can flow to Salesforce and vice versa.
Limitations. Heroku Connect only works with Heroku Postgres, not with external PostgreSQL, Cloud SQL, RDS, or other databases. Salesforce has been progressively deprioritizing the Heroku platform, and the long-term viability of Heroku Connect is uncertain. It is also limited to PostgreSQL; if your data is in MySQL, SQL Server, or another engine, Heroku Connect is not an option. Like ETL, synced data counts against Salesforce storage.
Best for: Teams already deeply invested in Heroku Postgres who need bidirectional sync. Not recommended for new integrations given the platform's uncertain future.
Option 4: Custom API Integration
Some teams build custom integrations using Salesforce's REST or SOAP APIs, Apex callouts, or middleware services.
How it works. A developer writes Apex code (or an external service) that calls out to your database's API on demand, or builds a scheduled batch process that queries the external database and upserts records into Salesforce. This can range from simple Apex HTTP callouts to full middleware stacks with queuing, retry logic, and error handling.
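Much of the "full middleware stack" cost is in plumbing like the retry logic mentioned above. As a minimal sketch under stated assumptions, here is what batch-upsert retry with exponential backoff looks like; the `upsert` callable is hypothetical and would wrap a Bulk API or REST API call in practice:

```python
import time

# Minimal retry wrapper for a middleware sync job pushing batches into
# Salesforce. The upsert callable is an assumption standing in for a
# Bulk API or REST API call that may be throttled or fail transiently.

def upsert_with_retry(upsert, batch, max_attempts=3, backoff_seconds=1.0):
    """Call upsert(batch), retrying with exponential backoff on failure;
    re-raise after the final attempt so the failure can be alerted on."""
    for attempt in range(1, max_attempts + 1):
        try:
            return upsert(batch)
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))
```

Real implementations add queuing, dead-letter handling, and monitoring on top of this; that accumulation of supporting code is exactly the maintenance burden described below.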
Strengths. Maximum flexibility. You can implement exactly the integration pattern your use case requires. No dependency on third-party platforms or specific protocols. You can combine real-time callouts (for on-demand data) with batch processes (for background sync) in whatever mix makes sense.
Limitations. This is the most resource-intensive option. You need developers who understand both Salesforce's Apex language and your external database. Custom integrations accumulate technical debt: they need to be updated when APIs change, maintained when Salesforce releases updates, and monitored for failures. Testing is complex because you are spanning two platforms. Apex callout limits (100 callouts per transaction, 120-second timeout) constrain what you can do in real time.
Best for: Highly custom integration requirements that do not fit neatly into any standard pattern, and organizations with dedicated Salesforce developers on staff.
Option 5: ForceCnx (OData Endpoint for Salesforce Connect)
ForceCnx is purpose-built to solve the biggest barrier to adopting Salesforce Connect: the lack of a simple, ready-made OData endpoint for your database.
How it works. ForceCnx connects to your external database (PostgreSQL, Google Cloud SQL, or Amazon RDS), introspects the schema, and generates a fully compliant OData 4.0 endpoint. You choose which tables to expose and how columns map to Salesforce fields. Then you point Salesforce Connect at the ForceCnx endpoint and sync your External Objects. The entire process takes minutes, not weeks.
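To illustrate what schema introspection involves conceptually, here is a simplified sketch of the translation from database column types to the OData 4.0 EDM types an endpoint advertises in its `$metadata` document. The mapping table is an assumption for illustration, not ForceCnx's actual implementation:

```python
# Illustrative sketch of schema introspection: translating PostgreSQL
# column types into OData 4.0 EDM primitive types for $metadata.
# This simplified mapping is an assumption, not a real product's code.

PG_TO_EDM = {
    "integer": "Edm.Int32",
    "bigint": "Edm.Int64",
    "numeric": "Edm.Decimal",
    "text": "Edm.String",
    "varchar": "Edm.String",
    "boolean": "Edm.Boolean",
    "timestamp": "Edm.DateTimeOffset",
}

def edm_properties(columns):
    """Map (column_name, pg_type) pairs to OData entity properties,
    defaulting unrecognized types to Edm.String."""
    return [
        {"Name": name, "Type": PG_TO_EDM.get(pg_type, "Edm.String")}
        for name, pg_type in columns
    ]

props = edm_properties([("id", "bigint"), ("status", "text")])
```

A managed service automates this translation (plus keys, nullability, and relationships) so the endpoint stays consistent with the database without hand-written metadata.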
Strengths. Combines the real-time data access and zero-storage benefits of Salesforce Connect with a setup experience that requires no code and no OData expertise. Schema introspection is automatic, so you do not need to manually define your data model. Field mapping lets you customize how your data appears in Salesforce without changing the source database. Supports multiple database types and cloud providers. The free tier (one connection, five entities) lets you validate the approach before committing.
Limitations. ForceCnx is the OData layer only. It does not support bidirectional sync (Salesforce Connect External Objects are read-only by design). If you need write-back to the external database from Salesforce, you will need a complementary solution for that direction. The same External Object limitations that apply to Salesforce Connect apply here: no triggers, no workflow rules directly on external records, and row limits per query.
Best for: Teams that want the benefits of Salesforce Connect but do not have the time, budget, or expertise to build and maintain their own OData server.
Side-by-Side Comparison
When choosing between these options, the key decision factors are data freshness, cost, complexity, and ongoing maintenance.
Data freshness. Salesforce Connect (Options 2 and 5) provides real-time data. ETL (Option 1) and Heroku Connect (Option 3) have sync latency ranging from minutes to hours. Custom integrations (Option 4) can be real-time or batch depending on implementation.
Salesforce storage impact. Only Salesforce Connect avoids consuming Salesforce data storage. Every other option copies data into Salesforce, and every copied record counts against your storage limits.
Setup complexity. ForceCnx (Option 5) and Heroku Connect (Option 3) are the simplest to set up. ETL platforms (Option 1) require significant configuration. Custom integrations (Option 4) require the most development effort. Salesforce Connect without a pre-built OData endpoint (Option 2 alone) requires building the OData server yourself.
Ongoing maintenance. Salesforce Connect with ForceCnx has the lowest maintenance overhead because there is no sync pipeline to monitor. ETL pipelines and custom integrations require active monitoring and maintenance. Heroku Connect is managed but tied to Heroku's platform.
Cost. ForceCnx's free tier makes initial validation free. ETL platforms like MuleSoft carry significant licensing costs. Custom integrations have high development costs. Salesforce Connect licensing is required regardless of which OData endpoint you use.
Making the Decision
If your primary need is real-time data access without the overhead of data duplication, Salesforce Connect is the right architecture. The question then becomes how to provide the OData endpoint.
If you have developers comfortable with the OData specification and time to build and maintain a custom server, you can do it yourself. But for most teams, especially those where Salesforce admins are driving the integration rather than developers, a managed OData service like ForceCnx is the practical choice.
Start by identifying the specific tables your Salesforce users need to access. Connect your database to ForceCnx, expose those tables, and configure the External Data Source in Salesforce. The free tier gives you enough capacity to run a proof of concept. If the approach works for your team (and for most teams it does), you can expand from there.
The best integration is the one that works reliably without constant attention. Salesforce Connect with a managed OData endpoint gives you exactly that: live data in Salesforce, zero storage impact, and nothing to maintain.