February 20, 2026

Why Salesforce Connect External Objects Beat Traditional ETL

Every Salesforce team that needs external data faces the same fork in the road. On one side is the familiar path: ETL. Extract data from the source system, transform it, and load it into Salesforce custom objects. It is proven, well understood, and the way data integration has worked for decades.

On the other side is Salesforce Connect with External Objects. Instead of copying data, you query it in real time from the source system. The data never moves. Salesforce reaches out and fetches exactly what it needs, when it needs it.

Both approaches work. But they have fundamentally different trade-offs, and choosing the wrong one can cost you significant time, money, and operational headaches. Here is a detailed comparison to help you make the right call.

How ETL Works in a Salesforce Context

In a traditional ETL integration, a middleware platform or custom process connects to your external database, extracts rows based on a schedule or trigger, transforms them to match Salesforce's data model, and loads them into custom objects using Salesforce's Bulk API or REST API.

The data physically moves from your database into Salesforce's storage. Once loaded, it behaves exactly like any other Salesforce data. Users can view it, edit it, run reports on it, trigger automation from it, and query it with SOQL. From the user's perspective, the data is native.

Common ETL platforms for Salesforce include MuleSoft (Salesforce's own integration platform), Informatica Cloud, Talend, Jitterbit, and Workato. Many teams also build custom ETL processes using Python scripts, Apache Airflow, or Salesforce's own Data Loader for simpler scenarios.
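To make the extract-transform-load loop concrete, here is a minimal sketch of the transform and batching steps. The source column names, the Salesforce field API names, and the 200-record batch size are all illustrative assumptions, not a real mapping; a production pipeline would extract rows with a database driver and load the batches via the Bulk API.

```python
# Sketch of the "T" in ETL: map source rows to a Salesforce custom
# object's field API names and chunk them for a Bulk API load.
# The column names and field names below are hypothetical.

FIELD_MAP = {
    "order_id": "External_Order_Id__c",
    "status": "Status__c",
    "total_cents": "Total_Amount__c",
}

def transform(row: dict) -> dict:
    """Rename source columns to Salesforce API names, convert cents to currency."""
    record = {sf: row[src] for src, sf in FIELD_MAP.items() if src in row}
    if "Total_Amount__c" in record:
        record["Total_Amount__c"] = record["Total_Amount__c"] / 100
    return record

def batch(records: list, size: int = 200) -> list:
    """Split records into load batches (200 is a common chunk size)."""
    return [records[i:i + size] for i in range(0, len(records), size)]

rows = [{"order_id": 1, "status": "shipped", "total_cents": 1999}]
payload = batch([transform(r) for r in rows])
```

Every line of this mapping is something someone has to keep in sync with the source schema, which is where the maintenance burden discussed below comes from.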

How Salesforce Connect Works

Salesforce Connect uses External Objects to represent data that lives outside Salesforce. An External Object looks similar to a custom object in the Salesforce UI: it has fields, page layouts, list views, and appears in related lists. But it stores no data. Instead, it stores metadata that tells Salesforce where to find the data and how to query it.

When a user opens an External Object record, Salesforce sends a real-time request to the external system (via OData or another supported adapter), retrieves the relevant data, and renders it in the UI. The query happens on demand, every time, and the data is always current.
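The on-demand request is essentially an OData query. As a rough sketch, assuming a hypothetical service root and an `Orders` entity set (neither is from a real org), the URL Salesforce Connect sends can be built like this:

```python
from urllib.parse import urlencode

# Sketch of the kind of OData 4.0 query Salesforce Connect issues on
# demand. The service root and the Orders entity set are hypothetical.
SERVICE_ROOT = "https://odata.example.com/v4"

def odata_url(entity_set: str, select=None, filter_=None, top=None) -> str:
    """Build an OData query URL with $select/$filter/$top system options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_:
        params["$filter"] = filter_
    if top:
        params["$top"] = str(top)
    query = urlencode(params)
    return f"{SERVICE_ROOT}/{entity_set}" + (f"?{query}" if query else "")

url = odata_url("Orders", select=["OrderId", "Status"],
                filter_="Status eq 'shipped'", top=25)
```

Because `$select`, `$filter`, and `$top` are pushed down to the source, only the rows and columns the user actually needs cross the wire.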

Real-Time Data vs. Stale Copies

This is the most significant difference and the one that should drive your decision in most cases.

With ETL, your Salesforce data is a copy, and copies get stale. If your sync runs every hour, then during that hour, your Salesforce users might be looking at outdated information. For some data, this is perfectly acceptable. A customer's mailing address probably does not change between sync cycles. Annual revenue figures can tolerate hourly latency.

But other data cannot. Inventory levels change by the minute. Order statuses move through fulfillment stages throughout the day. Pricing can be updated by algorithms in real time. Support ticket statuses in external systems reflect current state that support agents need to see now, not an hour from now.

With Salesforce Connect, there is no sync lag. The data your users see is the data in your database at that exact moment. When a warehouse worker marks an order as shipped in your fulfillment system, the next Salesforce user who opens that record sees "Shipped." Not after the next sync. Immediately.

If your use case involves data that changes frequently and where recency matters, this advantage alone is often decisive.

Storage Costs: The Hidden ETL Tax

Salesforce charges for data storage, and it is not cheap. Depending on your edition, you get a base allocation (typically a few GB), and additional storage costs roughly $125 per GB per month for data storage, or $5 per month for additional record blocks.

Every record you load via ETL counts against this allocation. If you are syncing millions of rows from an external database, you could be consuming gigabytes of Salesforce storage just for copied data. That is data you already store and pay for in your source database. With ETL, you are paying to store it twice.

External Objects consume zero Salesforce data storage. Zero. They are metadata definitions only, describing the shape and location of external data. Whether your external database has a thousand rows or a hundred million, your Salesforce storage usage does not change.

For organizations dealing with large data volumes, this difference alone can save thousands of dollars per year in Salesforce storage costs. It can also eliminate uncomfortable conversations with Salesforce about storage upgrades.
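The storage math is easy to sketch. Salesforce counts most records at a flat ~2 KB toward data storage regardless of how many fields they hold; combining that with the ~$125/GB/month figure cited above and an assumed 10 GB org allocation (both numbers are illustrative, not a quote) gives a back-of-envelope cost for ETL-copied rows:

```python
# Back-of-envelope cost of storing ETL-copied rows in Salesforce.
# Most records count a flat ~2 KB toward data storage; the price and
# the 10 GB base allocation below are illustrative assumptions.
KB_PER_RECORD = 2
PRICE_PER_GB_MONTH = 125

def copied_storage_gb(record_count: int) -> float:
    """GB of Salesforce data storage consumed by copied records."""
    return record_count * KB_PER_RECORD / (1024 * 1024)  # KB -> GB

def monthly_overage_cost(record_count: int, base_allocation_gb: float = 10) -> float:
    """Monthly cost of storage beyond the org's base allocation."""
    overage = max(0.0, copied_storage_gb(record_count) - base_allocation_gb)
    return overage * PRICE_PER_GB_MONTH

monthly_overage_cost(5_000_000)   # ~9.5 GB: fits inside the allocation
monthly_overage_cost(50_000_000)  # ~95 GB: pays for ~85 GB of overage
```

With External Objects both of these figures are zero, because no rows are copied at all.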

Maintenance and Operational Overhead

ETL pipelines are living systems that require ongoing care.

Schema drift is the most common source of ETL failures. When someone adds a column to the source database, renames a field, or changes a data type, the ETL mapping breaks. Depending on how the pipeline is configured, this might result in silent data loss (the new column is ignored), sync failures (the job errors out), or data corruption (a type mismatch causes incorrect values). Someone needs to monitor for these changes and update the mappings.
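One common mitigation is a pre-flight check that compares the columns the pipeline expects against what the source table actually has. A minimal sketch, using a hypothetical expected-column set (in production the live columns would come from `information_schema` rather than a hard-coded snapshot):

```python
# Sketch of a schema-drift check: compare the columns the ETL mapping
# expects against the source table's live columns. The column names
# here are hypothetical.
EXPECTED_COLUMNS = {"order_id", "status", "total_cents"}

def detect_drift(live_columns: set) -> dict:
    """Report columns the sync will silently ignore (new) and
    mappings that will fail or corrupt data (missing)."""
    return {
        "new": sorted(live_columns - EXPECTED_COLUMNS),
        "missing": sorted(EXPECTED_COLUMNS - live_columns),
    }

# Someone renamed total_cents and added a discount column:
drift = detect_drift({"order_id", "status", "total_amount", "discount"})
```

Even this simple check is one more piece of infrastructure that has to run, alert someone, and be kept current.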

API rate limits constrain how much data you can load in a given time window. Salesforce's Bulk API has limits on concurrent jobs, batch sizes, and daily API calls. Large syncs can hit these limits, especially if other integrations are also consuming API capacity. Rate limit management adds complexity to your ETL configuration.

Failure recovery is surprisingly nuanced. When an ETL job fails midway through a batch, you need to determine which records were successfully loaded, which were not, and how to resume without creating duplicates. Most ETL platforms handle this to some degree, but edge cases (network timeouts, partial batch failures, deadlocks) require attention.
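The usual answer is to make loads idempotent: key every record on an external ID so a re-run upserts rather than duplicates, and checkpoint a watermark so the job resumes where it stopped. A sketch of the pattern, with an in-memory dict standing in for the real upsert API:

```python
# Sketch of idempotent failure recovery for an ETL job: upsert on an
# external ID (re-running a failed batch cannot create duplicates)
# and checkpoint the last successfully loaded sequence number.
# The in-memory "org" dict stands in for a real upsert call.
org: dict = {}                   # external_id -> record
checkpoint = {"last_loaded": 0}  # watermark persisted between runs

def upsert(external_id: str, record: dict) -> None:
    org[external_id] = record    # same ID loaded twice -> one record

def run_sync(rows: list) -> None:
    for row in sorted(rows, key=lambda r: r["seq"]):
        if row["seq"] <= checkpoint["last_loaded"]:
            continue             # already loaded before the failure
        upsert(str(row["id"]), row)
        checkpoint["last_loaded"] = row["seq"]

rows = [{"id": 1, "seq": 1}, {"id": 2, "seq": 2}]
run_sync(rows)  # first attempt
run_sync(rows)  # re-run after a simulated failure: no duplicates
```

Getting this right across network timeouts and partial batches is exactly the nuance the paragraph above describes.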

Monitoring is essential. You need alerts for job failures, data quality checks to verify that synced data matches the source, and dashboards to track sync latency and throughput. This monitoring infrastructure needs to be set up, maintained, and responded to.

Salesforce Connect with a managed OData endpoint eliminates almost all of this operational overhead. There is no sync pipeline to monitor because there is no sync. Schema changes in the source database are reflected in the OData metadata; you update the External Object definition in Salesforce by clicking Validate and Sync. There are no API rate limits to manage because External Objects query data on demand without using the Bulk API. Failure recovery is a non-issue because there is no batch process to fail.

Feature Differences to Consider

External Objects are not identical to custom objects. Understanding the differences helps you assess whether Salesforce Connect is viable for your specific use case.

What External Objects support: list views, detail pages, record feeds, related lists (including indirect lookups), SOQL queries, reports and dashboards, Salesforce Mobile, Lightning pages, and global search.
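External Object API names end in `__x` (where custom objects end in `__c`), and custom fields on them keep the familiar `__c` suffix. A SOQL query against a hypothetical `Order__x` External Object looks like ordinary SOQL, but is dispatched to the source system as a live OData query:

```python
# SOQL against a hypothetical Order__x External Object. ExternalId is
# a standard field on External Objects; Status__c and Total_Amount__c
# are assumed custom fields. Salesforce translates the WHERE clause
# into an OData $filter against the source system.
soql = (
    "SELECT ExternalId, Status__c, Total_Amount__c "
    "FROM Order__x "
    "WHERE Status__c = 'shipped' "
    "LIMIT 50"
)
```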

What External Objects do not support: Apex triggers (use platform events instead if you need automation), workflow rules and Process Builder reacting directly to External Object changes, ownership-based sharing rules (access to External Objects is governed by object permissions and the org-wide default), and formula fields on standard objects that reference External Object fields.

Write-back. External Objects via OData are typically deployed read-only (Salesforce can support writable External Objects, but only when the external data source allows writes). If your users need to write data back to the external system from Salesforce, you will usually need a separate mechanism for that (such as a screen flow that calls an API, or a complementary integration for the write path).

For many use cases, especially where external data is reference data that users view but do not modify in Salesforce, these limitations are perfectly acceptable. Order history, inventory levels, transaction logs, usage metrics, and warehouse data are all naturally read-only in a Salesforce context.

When ETL Is Still the Right Choice

ETL is not going away, and for certain scenarios it remains the better option.

When you need Salesforce automation on external data. If business processes depend on workflow rules, flows, or triggers firing when external data changes, you need that data in Salesforce as custom objects. External Objects do not trigger automation.

When you need complex cross-object reporting. While External Objects support reports, complex reports that join external data with standard Salesforce data across multiple levels can be limited. If your reporting requirements are sophisticated, having the data natively in Salesforce gives you more flexibility.

When offline or cached access matters. Salesforce Mobile with External Objects requires network connectivity to the external system. If your field users work in areas with poor connectivity, having the data stored locally in Salesforce ensures access.

When data transformation is essential. If the raw data from your source database needs significant transformation, aggregation, or enrichment before it is useful in Salesforce, ETL gives you a natural place to perform that processing.

When Salesforce Connect Is the Better Path

Salesforce Connect shines in scenarios where freshness, cost, and simplicity are priorities.

When data changes frequently and users need current values. Real-time access means no sync lag.

When data volumes are large. External Objects do not consume Salesforce storage regardless of how many rows exist in the source.

When the data is reference or read-only. Viewing external records without modifying them is the sweet spot for External Objects.

When you want low maintenance. No pipelines, no sync monitoring, no failure recovery.

When budget is constrained. Avoiding both ETL platform licensing and Salesforce storage costs makes Salesforce Connect significantly cheaper for many integration patterns.

Making Salesforce Connect Practical with ForceCnx

The biggest barrier to Salesforce Connect adoption has historically been the OData endpoint. Salesforce Connect needs one, but most databases do not have one, and building a compliant OData 4.0 server is a project in itself.

ForceCnx removes that barrier. It connects to your PostgreSQL, Google Cloud SQL, or Amazon RDS database, introspects the schema, and generates a production-ready OData 4.0 endpoint. You configure which tables to expose and how fields map to Salesforce, and you get a working endpoint that Salesforce Connect can consume immediately.

With ForceCnx handling the OData layer, the decision between ETL and Salesforce Connect becomes purely about which architecture fits your requirements, not about which one is easier to implement. Both become achievable. But for the growing number of use cases where real-time, read-only access to external data is what your team needs, Salesforce Connect with ForceCnx is the simpler, cheaper, and more maintainable choice.

Try it with the free tier: one connection, five entities, and zero code required. See your external data in Salesforce in real time, and then decide whether you still need that ETL pipeline.

Ready to connect your database to Salesforce?

ForceCnx lets you expose PostgreSQL tables and views as Salesforce external objects in minutes, with no code and no ETL pipelines.

Get Started Free