How to integrate with the Tweet counts endpoints

This page contains information on several tools and key concepts that you should be aware of as you integrate the recent or full-archive Tweet counts endpoints into your system. We’ve split the page into two sections: helpful tools and key concepts.

 

Helpful tools

Before we start to explore some key concepts, we recommend that you use one of the following tools or code samples to start testing the functionality of these endpoints.

Code samples

Interested in getting started with these endpoints in your preferred coding language? We’ve got a handful of different code samples available that you can use as a starting point on our GitHub page, including a Python client.

Libraries

Take advantage of one of our many community third-party libraries to help you get started. You can find a library that works with the v2 endpoints by looking for the appropriate version tag.

Postman

Postman is a great tool that you can use to test out these endpoints. Each Postman request includes all of the given endpoint’s parameters to help you quickly understand what is available to you. To learn more about our Postman collections, please visit our Using Postman page.
 

Key concepts

Authentication

All Twitter API v2 endpoints require requests to be authenticated with a set of credentials, also known as keys and tokens. These specific endpoints require the use of an OAuth 2.0 Bearer Token, which means that you must pass a Bearer Token to make a successful request. You can either generate a Bearer Token from directly within a developer App, or generate one using the POST oauth2/token endpoint.
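For example, here is a minimal sketch of how a Bearer Token is passed in the Authorization header of a request to the recent Tweet counts endpoint. The token value is a placeholder; the request is built but not sent:

```python
import urllib.parse
import urllib.request

# Placeholder -- substitute the Bearer Token generated in your developer
# App or via the POST oauth2/token endpoint.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

def build_counts_request(
    query: str,
    endpoint: str = "https://api.twitter.com/2/tweets/counts/recent",
) -> urllib.request.Request:
    """Build (but do not send) an authenticated Tweet counts request."""
    url = f"{endpoint}?{urllib.parse.urlencode({'query': query})}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {BEARER_TOKEN}"}
    )

request = build_counts_request("snow")
print(request.get_header("Authorization"))  # Bearer YOUR_BEARER_TOKEN
```

With a real token, passing the `Request` object to `urllib.request.urlopen` (or using any HTTP client of your choice) would execute the call.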


Developer portal, Projects, and developer Apps

To work with any Twitter API v2 endpoints, you must have an approved developer account, have set up a Project within that account, and have created a developer App within that Project. Your keys and tokens within that developer App will work for these Tweet counts endpoints.

You can use keys and tokens associated with either a standard Project or an academic research Project to make requests to the recent Tweet counts endpoint. However, you will need to use an academic research Project to make requests to the full-archive Tweet counts endpoint. If you are using an academic research Project, you will have access to additional functionality across both Tweet counts endpoints, including the availability of additional operators and longer query lengths.

Please visit our section on academic research or page on product tracks to learn more.
 

Rate limits

Every day, many thousands of developers make requests to the Twitter API. To help manage the volume, rate limits are placed on each endpoint that limit the number of requests every developer can make on behalf of an App or on behalf of an authenticated user.

These endpoints are rate limited at the App level, meaning that you, the developer, can only make a certain number of requests over a given period of time from any given App (determined by the credentials that you are using).
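Twitter API responses report your current window via the `x-rate-limit-remaining` and `x-rate-limit-reset` headers (the latter is a UTC epoch timestamp). A simple sketch of a back-off check based on those headers, run here against a hard-coded sample rather than a live response:

```python
def seconds_until_reset(headers: dict, now: float) -> float:
    """Return how long to wait before the next request.

    Reads x-rate-limit-remaining (requests left in the window) and
    x-rate-limit-reset (epoch seconds when the window resets). Returns
    0.0 while requests remain, otherwise the seconds until reset.
    """
    remaining = int(headers.get("x-rate-limit-remaining", 1))
    reset = int(headers.get("x-rate-limit-reset", 0))
    if remaining > 0:
        return 0.0
    return max(0.0, reset - now)

# Sample headers: limit exhausted, window resets 90 seconds from "now".
sample = {"x-rate-limit-remaining": "0", "x-rate-limit-reset": "1000090"}
print(seconds_until_reset(sample, now=1000000.0))  # 90.0
```

In a real integration you would read the headers off each HTTP response and sleep for the returned number of seconds before retrying a 429 response.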

Building queries

The central feature of these endpoints is their use of a single query to filter the Tweets into the counts that are delivered to you. These queries are made up of operators that match on Tweet and user attributes, such as message keywords, hashtags, and URLs. Operators can be combined into queries with boolean logic and parentheses to help refine the query's matching behavior.

You can use our guide on how to build a query to learn more.

We have also written a more in-depth tutorial on how to build high-quality filters for getting Twitter data.
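As a small illustration of combining operators, the hypothetical helper below ORs a group of keywords inside parentheses and ANDs on further restrictions (in the query syntax, a space means AND and OR must be written explicitly), then percent-encodes the result for use in the `query` request parameter:

```python
from urllib.parse import quote

def build_query(keywords, *restrictions):
    """Group keywords with OR in parentheses, then AND on restrictions.

    This is an illustrative helper, not part of any official client.
    """
    grouped = "(" + " OR ".join(keywords) + ")"
    return " ".join([grouped, *restrictions])

query = build_query(["snow", "#snowstorm"], "lang:en", "-is:retweet")
print(query)        # (snow OR #snowstorm) lang:en -is:retweet
print(quote(query)) # percent-encoded, ready for the query parameter
```

The operators shown (`lang:`, `-is:retweet`, `#hashtag`) are standard v2 query operators; which ones are available to you depends on your Project's access level.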
 

Pagination

For recent Tweet counts, there is no next_token returned, which means that, regardless of the granularity, you will get the Tweet volume for the last seven days in a single API call.

For full-archive Tweet counts, you will get counts by hour for the last 30 days by default. For data older than 30 days, the response includes a next_token, which you can use to paginate through the additional data. The counts endpoint paginates at 31 days per page. For example, setting day granularity will return the count of results per day for 31 days per page, while setting hour granularity will return the count of results per hour for 744 hours (31 days x 24 hours) per page.
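The pagination loop can be sketched as follows. Here `fetch_page` stands in for the HTTP call to the full-archive counts endpoint, and the two hard-coded pages mimic the shape of real responses (a `data` array of count buckets plus a `meta` object carrying `total_tweet_count` and, when more pages exist, `next_token`):

```python
def fetch_all_counts(fetch_page):
    """Follow next_token until it is absent, collecting all count buckets."""
    counts, token = [], None
    while True:
        page = fetch_page(token)
        counts.extend(page["data"])
        token = page.get("meta", {}).get("next_token")
        if not token:
            return counts

# Two mocked pages at day granularity; only the first carries next_token.
pages = {
    None: {"data": [{"start": "2023-01-01T00:00:00Z", "tweet_count": 10}],
           "meta": {"total_tweet_count": 10, "next_token": "abc"}},
    "abc": {"data": [{"start": "2023-02-01T00:00:00Z", "tweet_count": 5}],
            "meta": {"total_tweet_count": 5}},
}
all_counts = fetch_all_counts(lambda token: pages[token])
print(sum(bucket["tweet_count"] for bucket in all_counts))  # 15
```

In a live integration, `fetch_page` would pass the token back to the endpoint as the `next_token` request parameter.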

 
