How to Increase the ServiceNow Export Limit

Many organizations require the ability to increase the ServiceNow export limit to better support use cases such as reporting, training AI models, and more. While the ServiceNow platform does allow end-users to raise export limits, doing so isn’t typically the most efficient approach to improving ServiceNow data availability.
By the end of this article, you will understand:
- What the ServiceNow export limit is
- Why these export limits exist
- How to overcome ServiceNow’s export limit
Whether you’re dealing with occasional large data exports or need to move hundreds of thousands of records daily, this article covers practical and effective solutions.
Don’t have time to read the entire article? Here’s a quick summary of the key points:
TL;DR: How to Increase the ServiceNow Export Limit
- ServiceNow’s Default Export Limit is 10,000 records for most file types, including CSV, Excel, and XML, with additional restrictions for certain formats such as PDF (1,000 rows) and Excel (500,000 cells).
- Why Export Limits Exist: These limits ensure ServiceNow’s performance and prevent system overload during large data exports, which can slow down the platform or even cause it to crash.
- Challenges with Current Limits: The 10,000-record limit makes data analysis difficult for large datasets, and exporting during peak hours can slow down performance. Additionally, you may lose data relationships and context during the export.
- Solutions to Overcome Export Limits:
  - Method 1: Adjust system properties to increase the export limit for occasional large exports. Although this is a quick fix, it degrades ServiceNow’s performance and becomes an operational bottleneck.
  - Method 2: Use pagination to break down large datasets into smaller chunks. But it’s time-consuming and not feasible if you have to export millions of records.
  - Method 3: Use data replication and/or integration solutions for larger, regular exports. Specialist solutions are required for substantial data movement (i.e., hundreds of thousands of records per day).
- Recommended solution for high-throughput: A data replication solution like Perspectium DataSync helps bypass export limits while maintaining performance, ensuring real-time synchronization and scalability.
- Choose the Right Solution: For occasional exports, adjusting system properties or using pagination works. For large-scale, regular exports or real-time data needs, invest in enterprise-grade solutions like Perspectium.
What Is ServiceNow’s Export Limit?
ServiceNow has a default export limit of 10,000 records per export for most file types. Some file types have additional limitations, such as PDF, where the default limit is 1,000 rows and further restrictions apply to the number of pages and columns.
Other technical limitations to the size of exports apply. For example, when an export is configured to be sent in an email, email size limits apply.
Exceeding ServiceNow export limits can result in partial exports and/or errors.
Default export limit table based on the file format
ServiceNow’s export limits across different file formats are controlled by specific system properties, as shown in the table below.
| Format | Format-specific export limit | General export limit | Default export limit |
|---|---|---|---|
| Excel (XLSX) | glide.xlsx.max_cells | N/A | 500,000 |
| Excel (XLS) | glide.excel.max_cells | N/A | 500,000 |
| XML | glide.xml.export.limit | glide.ui.export.limit | 10,000 |
| CSV | glide.csv.export.limit | glide.ui.export.limit | 10,000 |
| Excel (XLSX) | glide.xlsx.export.limit | glide.ui.export.limit | 10,000 |
| Excel (XLS) | glide.excel.export.limit | glide.ui.export.limit | 10,000 |
| JSON | glide.json.export.limit | glide.ui.export.limit | 10,000 |
| PDF | glide.pdf.max_rows (settable from 0 to 5,000; if unset, or set above 5,000, the default of 1,000 is used) | N/A | 1,000 |
| PDF | glide.pdf.max_detail_pages (settable from 0 to 250; if unset, or set above 250, the default of 250 is used) | N/A | 250 |
| PDF | glide.pdf.max_columns (only 25 header labels fit on a page) | N/A | 25 |
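These values live as records in the sys_properties table, so you can check what is currently in effect on your instance. Below is a minimal sketch using Python, the requests library, and ServiceNow’s standard Table API; the instance name and credentials are placeholders you would replace with your own.

```python
import requests

# Placeholders: substitute your own instance name and admin credentials.
INSTANCE = "your-instance"
AUTH = ("admin", "password")

url = f"https://{INSTANCE}.service-now.com/api/now/table/sys_properties"
params = {
    # Match the export-related properties from the table above.
    "sysparm_query": "nameLIKEexport.limit^ORnameLIKEmax_cells^ORnameLIKEpdf.max",
    "sysparm_fields": "name,value",
}

response = requests.get(url, auth=AUTH, params=params, timeout=30)
response.raise_for_status()

for prop in response.json()["result"]:
    print(f"{prop['name']} = {prop['value']}")

# Note: a property that has never been overridden may have no record at all,
# in which case the platform falls back to the defaults shown in the table.
```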
Why Do These Export Limits Exist in ServiceNow?
ServiceNow’s export limits restrict the amount of data per export to help maintain ServiceNow’s performance and stability. Handling large data sets, especially with complex files like Excel or PDF, can strain system resources, slow down processing, or even crash the platform.
While these default limits ensure exports remain manageable and prevent system overload, they present challenges to organizations with a need for timely access to ServiceNow data.
Whether you are an IT administrator trying to analyze incident trends, a business analyst creating executive reports, or a data professional trying to migrate data for compliance processes, ServiceNow’s export restrictions present a significant roadblock.
Over time, the issue is becoming more significant. Today’s enterprises generate millions of records across incidents, changes, assets, and service requests, making ServiceNow export limits feel increasingly restrictive.
The Problem with ServiceNow Exports and Export Limits
ServiceNow’s out-of-the-box export features come with several built-in limits that can impact how effectively you work with your data. Understanding these limits is the first step to finding a solution that works for your organization.
The 10,000 Record Roadblock
As discussed earlier, the most obvious limitation is the default export limit of 10,000 records for most file types, including Excel (.xlsx), CSV, and XML. This limit applies whether you’re exporting from ServiceNow reports or directly from platform lists. For many organizations, it makes thorough data analysis, such as reviewing a year of incident trends to spot patterns and improve service delivery, almost impossible.
With thousands of incidents generated each month, you’ll quickly hit the 10,000-record limit. This leaves you with either incomplete data or hours spent manually splitting your export into smaller chunks. Both of these options are far from ideal for serious business analysis.
Performance Degradation Issues
Even if you stay within export limits—using workarounds such as splitting large exports into smaller ones—substantial overall export volumes can still degrade ServiceNow performance.
The platform is not designed to support real-time, high-throughput data extraction, and workarounds aimed at increasing the total volume of data exported from the platform slow query speeds and impact system responsiveness for users. As a result, many organizations are forced to schedule exports outside operational hours, introducing another hurdle that limits the availability and timeliness of data.
Loss of Data Context and Relationships
A major issue with ServiceNow’s native export methods is the loss of crucial data context. When you export ServiceNow records to Excel or CSV, attachments, comments, and relationships between records can be lost. This makes your exported data incomplete and potentially misleading.
For example, an incident record might reference related change requests, knowledge articles, or configuration items—relationships that are vital for understanding the full picture. But these relationships get missed in standard exports, leaving you with isolated data instead of the interconnected insights that make ServiceNow’s data valuable.
Manual Process Bottlenecks
ServiceNow’s native export capabilities require significant manual intervention. There’s no built-in way to automate large exports, no native capabilities for data transformation, and a lack of sophisticated error handling when exports fail. This manual overhead creates operational bottlenecks that scale poorly as your data needs grow.
The lack of automation also increases the risk of human error. When team members manually export, prepare, and process data for monthly or quarterly reports, inconsistencies in filters, date ranges, or field selections can lead to inaccurate analysis and flawed decision-making.
Static Data Problem
Once you export data using ServiceNow’s native capabilities, it becomes static, and there’s no automatic synchronization with the source system. This means your exported data is essentially outdated the moment it leaves the platform. In rapidly evolving IT environments where incidents, changes, and requests are constantly updated, working with static data can be worse than having no data at all.
All these challenges demonstrate how important it is to increase ServiceNow’s export limits. Without doing so, organizations are left with incomplete and outdated data, making it tough to make informed decisions in a timely manner.
How to Increase ServiceNow Export Limits (or Avoid Them Entirely)
ServiceNow’s export limits do not align with many organizations’ demand for data. Fortunately, the system allows its default export limit to be increased, providing an ad hoc workaround that organizations can apply when required.
In cases where there is a consistent requirement for exceeding default export limits, organizations should consider utilizing ServiceNow data replication and integration solutions. Such solutions allow organizations to avoid ServiceNow export limits entirely, and make the platform’s data available in third-party systems.
Here are a few approaches you can take to avoid, or increase, ServiceNow’s export limits:
Method 1: Adjusting System Properties (Quick Fix)
As mentioned above, ServiceNow’s default export limit is governed by system properties. As such, end-users can increase the limit by updating the Export Properties via System Properties.
Step-by-step process:
1. Use the filter navigator to access System Properties → Import Export (typing “export” in the search bar should take you straight there).

2. Locate the Export Row Limit property of the file type you want the increased limit to apply to.
3. Update the value to your desired limit (e.g., 20,000).

4. Adjust the Export Cell Limit if needed for very wide datasets.
5. Save the changes. To check whether it worked as expected, first test with an export that slightly exceeds the previous limit.
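If you prefer to script this change rather than use the UI, the same property can be updated through ServiceNow’s Table API. The following is a hedged sketch, not a recommended practice at scale: the instance, credentials, and the 20,000 value are illustrative, and the performance caveats below still apply.

```python
import requests

INSTANCE = "your-instance"    # placeholder
AUTH = ("admin", "password")  # placeholder: requires admin rights
BASE = f"https://{INSTANCE}.service-now.com/api/now/table/sys_properties"
PROP, NEW_LIMIT = "glide.ui.export.limit", "20000"

# Look up the property record; it may not exist until first overridden.
resp = requests.get(
    BASE,
    auth=AUTH,
    params={"sysparm_query": f"name={PROP}", "sysparm_fields": "sys_id,value"},
    timeout=30,
)
resp.raise_for_status()
matches = resp.json()["result"]

if matches:
    # Update the existing record with the new limit.
    requests.patch(
        f"{BASE}/{matches[0]['sys_id']}", auth=AUTH, json={"value": NEW_LIMIT}, timeout=30
    ).raise_for_status()
else:
    # Create the property if it has never been set explicitly.
    requests.post(
        BASE, auth=AUTH, json={"name": PROP, "value": NEW_LIMIT}, timeout=30
    ).raise_for_status()
```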
Although this method can be used for occasional significant exports, ServiceNow strongly advises against setting extremely high limits. The 10,000-record limit exists to protect platform performance, and dramatically increasing it can cause system slowdowns, timeouts, and even crashes during peak usage periods.
This approach works best when you need occasional exports of 20,000-50,000 records and can schedule them during low-traffic periods. For regular large-scale exports or enterprise-level data movement, you will need data replication and/or integration solutions, as discussed below.
Benefits
- Quick and easy to configure
Drawbacks
- Little protection against performance slowdowns; performance issues are likely at high limits
- Other technical limits may still apply
Method 2: Pagination
This is another workaround to export large datasets. In this approach, large exports are split into smaller, more manageable chunks.
Although this method is more time-consuming and requires more effort, it provides better control and reduces the risk of performance issues.
Process:
1. Prepare your dataset
First, use the “export directly from a URL” feature and narrow down the records you want to export by applying filters. Once filtered, check how many total records are returned.
2. Check the size of the result
If the number of rows is within your system’s export limit, you can download them directly. If the count is larger than the allowed threshold, you’ll need to split the export into smaller groups.
3. Retrieve the initial block of records
To collect the first 10,000 entries, use a sysparm query in line with the following syntax:
https://<instance name>.service-now.com/syslog_list.do?XML&sysparm_orderby=sys_id&sysparm_record_count=10000
This command returns the first 10,000 rows in order of their unique sys_id values.
4. Locate the starting point for the next batch
After exporting the first block, determine the sys_id of the first record in the next batch (i.e., the sys_id of record 10,001). One way to do this is:
- Create a database view of the table that includes the sys_id column.
- Open the view without adding filters.
- Sort the list by sys_id and scroll to record 10,001.
- Copy the sys_id value from that row.
5. Run a follow-up export using that ID
With the sys_id you copied, request the next set of rows using a “greater than or equal to” condition. For example:
https://<instance name>.service-now.com/syslog_list.do?XML&sysparm_query=sys_id%3E%3Db4aedb520a0a0b1001af10e278657d27&sysparm_orderby=sys_id&sysparm_record_count=10000
Here, notice that URL encoding replaces special symbols:
- > (greater than) becomes %3E
- = (equal to) becomes %3D
6. Repeat until complete
Continue applying this process, each time using the sys_id of the first record in the next block, until you’ve exported all required data.
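If you need to repeat this often, the cursor-walking in steps 3 to 6 can be scripted. Here is a rough Python sketch of the same loop; it simplifies step 4 by using a strict “greater than” condition on the last sys_id of each downloaded batch, so there is no need to look up record 10,001 manually. The instance name, table, credentials, and batch size are placeholders.

```python
import re
import requests

INSTANCE = "your-instance"    # placeholder
TABLE = "syslog"              # table from the example URLs above
AUTH = ("admin", "password")  # placeholder credentials
BATCH = 10000                 # keep at or below your instance's export limit

last_sys_id = ""
batch_num = 0
while True:
    # Same list-export URL as steps 3 and 5, with > URL-encoded as %3E.
    cursor = f"&sysparm_query=sys_id%3E{last_sys_id}" if last_sys_id else ""
    url = (
        f"https://{INSTANCE}.service-now.com/{TABLE}_list.do?XML"
        f"{cursor}&sysparm_orderby=sys_id&sysparm_record_count={BATCH}"
    )
    resp = requests.get(url, auth=AUTH, timeout=300)
    resp.raise_for_status()

    # Pull sys_id values out of the XML; an empty batch means we're done.
    sys_ids = re.findall(r"<sys_id>([0-9a-f]{32})</sys_id>", resp.text)
    if not sys_ids:
        break

    batch_num += 1
    with open(f"{TABLE}_batch_{batch_num}.xml", "w", encoding="utf-8") as f:
        f.write(resp.text)

    # Strict "greater than" on the last exported sys_id avoids overlap.
    last_sys_id = sys_ids[-1]
```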
Benefits:
- Helpful for occasional analysis of historical datasets
- Keeps each export smaller, offering some protection against performance slowdowns
Limitations:
- Time-consuming to set up and manage
- Requires manual effort to combine files for full analysis
- Does not address real-time data synchronization needs
- Still carries performance risks when overall export volumes are high
Method 3: ServiceNow Integration and Replication Solutions
If you need to move beyond ServiceNow’s native export options, integration solutions offer the most powerful and flexible way to work with your data. Instead of working within export limits, you can connect ServiceNow directly to external systems, data warehouses, or reporting platforms.
These solutions come in different forms. Traditional approaches include API and ETL solutions. However, organizations requiring significant throughput need specialist, high-throughput solutions to sustain data movement without negatively impacting ServiceNow’s performance.
Key Considerations When Selecting an Integration and/or Data Replication Solution
1. ServiceNow’s REST API is an inherent bottleneck: Because of the way ServiceNow was architected, any retrieval of data that is facilitated by the REST API will increase the load on system resources and eventually impact performance.
This is because the REST API and the platform’s end-users both interact with the same application layer. Both API and ETL integration solutions typically require the REST API to operate. To avoid performance issues, ServiceNow provides API rate limiting capabilities out of the box.
As with export limits, these limits can be increased by the end-user at the risk of exacerbating performance issues on the platform. For the highest possible throughput, avoiding this dynamic is recommended, such as by leveraging push technology to initiate the movement of data out of the platform. (For API clients that must work within these limits, see the backoff sketch after this list.)
2. Dynamic and bulk data movement may require different solutions: For organizations that require event-based, dynamic data movement in real or near-real time, ETL solutions will not be fit for purpose.
The most common technology to facilitate this use case is the API, but organizations should also note that the aforementioned ServiceNow API-related performance issues will often disrupt the distribution of data and limit or prevent real-time data movement.
Conversely, for organizations that need to move large volumes of data in batches, ETL is recommended over API. Perspectium, a push-technology-based solution, is equally capable of real-time, dynamic data movement as it is of bulk data movement in batches. Perspectium also does not require an API to initiate data movement out of ServiceNow, meaning performance is better preserved.
3. Maintenance overhead and complexity: Many API and ETL solutions are point-to-point integrations. This means that every time organizations need to add a new target system for ServiceNow data replication and/or integration, a new pipeline must be built or implemented.
For organizations handling the implementation of such solutions in-house, this puts an additional burden on internal resources. The burden is exacerbated by ongoing maintenance requirements. From ensuring integrations remain operational through platform upgrades, to troubleshooting data loss and data quality issues, there is simply more to go wrong.
4. Scalability: When reliant on point-to-point, API- and ETL-based solutions, scalability is impeded by the organization’s ability to effectively manage the solutions over time.
As mentioned above, the more technologies an organization introduces to facilitate data movement, the more potential there is for issues to arise. This can lead to spiraling technical debt with a significant impact on operations reliant on timely access to data.
Scalability is further impeded by the aforementioned potential for performance issues. More integration and/or replication solutions competing for resources in order to retrieve data means more strain on ServiceNow performance.
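Picking up the rate-limiting point from consideration 1: if your integration does rely on the REST API, client code should back off when the platform signals it is throttling, rather than simply raising the limits. Below is a minimal sketch, assuming the instance returns HTTP 429 with an optional Retry-After header when a rate limit rule is hit.

```python
import time
import requests

def get_with_backoff(url, auth, params=None, max_retries=5):
    """GET that backs off when the instance returns HTTP 429 (rate limited)."""
    for attempt in range(max_retries):
        resp = requests.get(url, auth=auth, params=params, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # Honor Retry-After if the response carries it; otherwise fall back
        # to exponential backoff (1s, 2s, 4s, ...).
        wait = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError(f"Still rate-limited after {max_retries} retries: {url}")
```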
When to Use ServiceNow APIs:
- Real-Time or Near Real-Time Data: APIs are great for event-based, dynamic data movement.
- Small to Medium Data Volumes: APIs excel when you have a frequent or consistent requirement to move data sets that aren’t too large.
- Custom Data Control: APIs give you granular control over what data you retrieve. For example, if you need specific fields or need to filter the data in certain ways, APIs allow you to fine-tune your requests.
- Ad-hoc Exports: If you need a one-time extraction or a specific dataset, APIs are ideal because you can programmatically trigger them when necessary without a pre-built integration pipeline.
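To illustrate the custom data control and ad-hoc export points above, here is a hedged sketch of a one-off extract through ServiceNow’s Table API, requesting only the fields needed and paging with sysparm_limit and sysparm_offset. The instance, credentials, and query are illustrative.

```python
import requests

INSTANCE = "your-instance"    # placeholder
AUTH = ("admin", "password")  # placeholder

def fetch_incidents(query, fields, page_size=1000):
    """Page through the Table API, requesting only the fields we need."""
    url = f"https://{INSTANCE}.service-now.com/api/now/table/incident"
    offset, records = 0, []
    while True:
        resp = requests.get(url, auth=AUTH, timeout=60, params={
            "sysparm_query": query,
            "sysparm_fields": ",".join(fields),
            "sysparm_limit": page_size,
            "sysparm_offset": offset,
        })
        resp.raise_for_status()
        page = resp.json()["result"]
        records.extend(page)
        if len(page) < page_size:
            return records
        offset += page_size

# Example: open P1/P2 incidents, with only the fields needed for a report.
rows = fetch_incidents("active=true^priorityIN1,2",
                       ["number", "priority", "short_description"])
print(f"Fetched {len(rows)} records")
```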
You may also find this article useful: ServiceNow API Integration: A Comprehensive Guide
When to Use ETL Integrations
- Large-Scale, Batch Data Processing: ETL solutions are helpful when you need to move massive datasets (millions of records) over a scheduled period.
- Scheduled Data Exports: If your business requires regular, automated data exports (e.g., daily or weekly), ETL tools are great.
- Data Transformation: ETL tools are specifically built for data manipulation. If you need to transform the data (like cleaning, aggregating, or mapping) before moving it to a data warehouse or external system, ETL tools provide out-of-the-box features to handle this complexity.
- Long-Term, Enterprise-Scale Integrations: When you’re integrating ServiceNow with other enterprise systems (like SAP, Salesforce, or a data warehouse), ETL solutions are more scalable.
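To make the “transform” step concrete, here is a minimal sketch of the kind of cleanup an ETL pipeline might perform between extracting a ServiceNow batch and loading it into a warehouse; the file names, fields, and priority mapping are illustrative.

```python
import csv

# Illustrative mapping from ServiceNow's numeric priority values to labels.
PRIORITY_LABELS = {"1": "Critical", "2": "High", "3": "Moderate",
                   "4": "Low", "5": "Planning"}

def transform(record):
    """Clean and map one extracted incident record before loading."""
    return {
        "number": record["number"].strip(),
        "priority": PRIORITY_LABELS.get(record["priority"], "Unknown"),
        "short_description": record["short_description"].strip() or "(no description)",
    }

# Read the extracted batch, transform each row, and write the load-ready file.
with open("incident_extract.csv", newline="", encoding="utf-8") as src, \
     open("incident_load.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["number", "priority", "short_description"])
    writer.writeheader()
    for row in reader:
        writer.writerow(transform(row))
```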
You may also find this article useful: A Guide to ServiceNow ETL Integration & Replication
When to Seek Alternatives
While APIs and ETL tools can meet many integration needs, there are scenarios where organizations should look beyond these traditional approaches. A high-throughput, replication-based solution like Perspectium is worth considering when:
- You need to move very large datasets regularly: millions of records per day, or ongoing replication of entire tables, not just thousands of records at a time.
- Real-time or near real-time availability of data is essential, and API rate limits or performance degradation make keeping up with demand impractical.
- Preserving ServiceNow performance is a priority, and you cannot afford query slowdowns or instability caused by heavy API or ETL activity.
- You require both batch and event-driven data movement and need a single solution that handles both equally well.
- Scalability is a long-term requirement: you expect your data integration needs to grow, and point-to-point API or ETL pipelines will not scale without significant technical debt.
- You want to reduce maintenance burden and avoid dedicating internal resources to constantly rebuilding and troubleshooting fragile pipelines.
Perspectium’s publish/subscribe replication model avoids the bottlenecks of API-based extraction and provides sustained, enterprise-scale throughput. However, it is a specialist solution that requires Perspectium professional services for implementation—best suited for organizations with complex, high-volume data movement needs.
About Perspectium DataSync: High-Throughput ServiceNow Data Replication
For organizations with the biggest demand for ServiceNow data, there is Perspectium.
How Perspectium Works
Perspectium works differently from typical ServiceNow connectors and replicators.
Push Technology, Not API
Instead of relying on the REST API to initiate data movement out of ServiceNow, Perspectium utilizes more efficient push technology, natively available in ServiceNow. This helps it avoid the impact to performance that is inherent when retrieving ServiceNow data via external API calls.
Publish and Subscribe, Not Point-to-Point
As well as benefiting from the use of push technology, Perspectium users also benefit from the solution’s publish and subscribe (pub/sub) model of data replication. With this approach, data is pushed out of ServiceNow into a store-forward-enabled message bus (the Perspectium MBS).
The MBS then distributes data to the destination target(s). This means Perspectium integrations are scalable, and one-to-many, allowing for multiple data pipelines to be introduced and for data to be distributed among them simultaneously, without any additional burden on ServiceNow.
Data Resilience, Not Data Loss
Thanks to Perspectium’s store-forward-enabled MBS, replicated/exported data is more resilient: transferred data is queued within the MBS, so if there is an outage at the target system, data is not lost and instead remains secure in the MBS queue until the transfer can resume.
Some of the key benefits of using Perspectium are as follows:
- Maintain peak platform performance even during high-frequency or high-volume exports.
- Handle large-scale, high-throughput data movement with low latency, making it ideal for real-time dashboards and time-sensitive workflows.
- Trigger automatic data exports using Perspectium’s Dynamic Share feature, based on real-time conditions like record creation or updates.
- Enable one-to-many replication, allowing you to distribute ServiceNow data to multiple destinations without compromising system stability.
And most importantly, Perspectium is a fully managed service. This means that the platform, including the MBS, is implemented and maintained by Perspectium’s experts, reducing the burden on your internal IT teams. This ensures:
- Ongoing support and optimization
- Faster time to value
- Seamless alignment with ServiceNow’s release cycle, minimizing disruption during upgrades
| Approach | Verified Capacity | Performance Impact | Real-time Capability | Maintenance Required |
|---|---|---|---|---|
| Increased Limits | User reports up to 50,000–80,000 | High | No | Low |
| Pagination | Unlimited (within individual limits) | Low to Medium | No | Medium |
| API Solutions | Depends on solution and ServiceNow performance; typically a few million per day, max | Medium to High | Yes | Medium to High |
| ETL Solutions | Depends on solution and ServiceNow performance; typically a few million per day, max | Medium to High | No | Medium to High |
| Perspectium | Over 40 million records per day | Low | Yes | Vendor maintained |
Choosing the Right Approach for Your Organization
Choosing the right solution depends on your specific requirements, technical resources, and long-term data strategy.
For occasional exports of moderate size, adjusting system properties or implementing pagination strategies may suffice. However, if your organization regularly works with large datasets, requires real-time data access, or needs to integrate ServiceNow data with other business systems, a solution like Perspectium DataSync will provide better long-term value.
The key is to think beyond the immediate export problem and consider your broader data strategy. Remember that the goal isn’t just to export more data but to make that data more valuable for your organization.
Whether you choose to adjust limits, implement APIs, or invest in enterprise solutions, ensure your approach aligns with your organization’s data maturity and business objectives.
Want to know how DataSync can help your organization uncover the maximum potential of your ServiceNow data? Contact us today!