How to Use Python with APIs (Requests and JSON Handling)

[Illustration: Python interacting with web APIs, showing requests code snippets, JSON responses, request/response flow, data parsing, authentication, and JSON output.]

Why Mastering API Integration Matters in Modern Development

In today's interconnected digital landscape, the ability to communicate with external services and data sources has become fundamental to building meaningful applications. Whether you're pulling weather data, integrating payment systems, accessing social media platforms, or connecting to cloud services, APIs (Application Programming Interfaces) serve as the bridges that make these interactions possible. Python, with its elegant syntax and powerful libraries, has emerged as one of the most accessible languages for working with APIs, enabling developers at all levels to harness the vast ecosystem of web services available today.

API integration represents the process of sending structured requests to external servers and receiving data in return, typically formatted as JSON (JavaScript Object Notation). This exchange allows your applications to leverage capabilities far beyond what you could build alone—from machine learning models hosted in the cloud to real-time financial data feeds. Understanding how to effectively work with APIs transforms your programs from isolated scripts into connected applications that can interact with the broader digital world, access constantly updated information, and provide users with dynamic, relevant experiences.

Throughout this comprehensive guide, you'll discover the practical techniques for making HTTP requests using Python's requests library, handling JSON data with confidence, managing authentication securely, implementing error handling strategies, and optimizing your API interactions for production environments. You'll learn not just the technical mechanics, but also the best practices that separate functional code from robust, maintainable solutions. By the end, you'll possess the knowledge to confidently integrate virtually any REST API into your Python projects, troubleshoot common issues, and build applications that communicate seamlessly with external services.

Understanding the Foundation: APIs and HTTP Requests

Before diving into code, it's essential to understand what happens when your Python application communicates with an API. At its core, an API interaction involves your application (the client) sending an HTTP request to a server, which processes that request and returns a response. This request-response cycle follows specific protocols and conventions that ensure reliable communication between systems.

HTTP requests come in several types, commonly called methods or verbs. The GET method retrieves data from a server without modifying anything—think of it as asking a question. The POST method sends data to create new resources, like submitting a form. PUT and PATCH update existing resources, while DELETE removes them. Each request includes headers (metadata about the request), a URL (the address of the resource), and optionally a body (data you're sending).

"The beauty of REST APIs lies in their predictability—once you understand the pattern, you can work with thousands of different services using the same fundamental approach."

Responses from APIs typically include a status code indicating success or failure, headers with metadata, and a body containing the actual data. Status codes in the 200 range indicate success, 400s indicate client errors (like bad requests), and 500s indicate server errors. The response body most commonly contains JSON data, a lightweight format that's both human-readable and easy for machines to parse.

Installing and Importing Essential Libraries

Python's requests library simplifies HTTP interactions dramatically compared to working with lower-level networking code. While Python includes a built-in urllib module, requests provides a much more intuitive interface that handles many complexities automatically.

Installing requests is straightforward using pip, Python's package manager:

pip install requests

Once installed, you'll typically import both requests and json at the beginning of your scripts:

import requests
import json

The json module comes standard with Python and provides functions for converting between JSON strings and Python data structures. While requests can often handle JSON conversion automatically, understanding the json module gives you finer control when needed.

Making Your First GET Request

Let's start with the most common API operation—retrieving data. A basic GET request requires just the API endpoint URL:

response = requests.get('https://api.example.com/data')
print(response.status_code)
print(response.text)

This simple code sends a request and stores the entire response object. The status_code attribute tells you whether the request succeeded, while text contains the raw response body as a string. However, when working with JSON APIs, you'll typically want to convert the response into Python data structures:

response = requests.get('https://api.example.com/data')
data = response.json()
print(data)

The .json() method parses the response body into native Python structures, making it easy to access specific values: JSON objects become dictionaries, JSON arrays become lists, and primitive values map to strings, numbers, booleans, and None.

Working with Query Parameters and Headers

Most real-world APIs require additional information beyond just the endpoint URL. Query parameters allow you to filter, sort, or specify exactly what data you want. Rather than manually constructing URL strings with parameters, requests provides a clean way to pass them as a dictionary:

params = {
    'category': 'technology',
    'limit': 10,
    'sort': 'date'
}

response = requests.get('https://api.example.com/articles', params=params)

The requests library automatically formats these parameters correctly, handling URL encoding and constructing the final URL as https://api.example.com/articles?category=technology&limit=10&sort=date.
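
If a filter doesn't seem to take effect, you can confirm the URL requests actually built by printing response.url (same placeholder endpoint as above):

response = requests.get('https://api.example.com/articles', params=params)
print(response.url)
# https://api.example.com/articles?category=technology&limit=10&sort=date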

Headers serve a different purpose—they provide metadata about your request. Common uses include specifying content types, sending authentication tokens, and identifying your application:

headers = {
    'User-Agent': 'MyApp/1.0',
    'Accept': 'application/json',
    'Authorization': 'Bearer your_token_here'
}

response = requests.get('https://api.example.com/data', headers=headers)
"Proper header configuration isn't just good practice—it's often the difference between a successful API call and a rejected request."
Header Type Purpose Example Value
Authorization Authenticate your requests Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
Content-Type Specify format of data you're sending application/json
Accept Specify format you want to receive application/json
User-Agent Identify your application MyApplication/2.0 (contact@example.com)

Sending Data with POST, PUT, and PATCH Requests

While GET requests retrieve data, POST requests send data to create new resources. When working with JSON APIs, you'll typically send data as a JSON payload in the request body:

new_user = {
    'username': 'johndoe',
    'email': 'john@example.com',
    'age': 30
}

response = requests.post('https://api.example.com/users', json=new_user)
print(response.json())

Notice the json parameter—when you pass a dictionary to this parameter, requests automatically converts it to JSON format and sets the appropriate Content-Type header. This is cleaner than manually converting with json.dumps() and setting headers yourself.
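
For comparison, here is roughly what that manual approach looks like with the same new_user dictionary; the json= shortcut does this for you:

import json

payload = json.dumps(new_user)  # serialize the dictionary to a JSON string
headers = {'Content-Type': 'application/json'}
response = requests.post('https://api.example.com/users', data=payload, headers=headers)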

PUT requests typically replace an entire resource, while PATCH requests update specific fields:

# Update entire user resource
updated_user = {
    'username': 'johndoe',
    'email': 'newemail@example.com',
    'age': 31
}
response = requests.put('https://api.example.com/users/123', json=updated_user)

# Update only specific fields
partial_update = {
    'email': 'newemail@example.com'
}
response = requests.patch('https://api.example.com/users/123', json=partial_update)

Handling Form Data and File Uploads

Not all APIs use JSON—some expect traditional form data or file uploads. The requests library handles these scenarios elegantly:

# Sending form data
form_data = {
    'username': 'johndoe',
    'password': 'secretpass'
}
response = requests.post('https://api.example.com/login', data=form_data)

# Uploading files (a with-block ensures the file handle is closed)
with open('document.pdf', 'rb') as f:
    files = {'file': f}
    response = requests.post('https://api.example.com/upload', files=files)

When using the data parameter instead of json, requests sends the information as form-encoded data. For file uploads, the files parameter handles the multipart encoding automatically.
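
The two can also be combined. As a sketch (the endpoint and field names here are illustrative), form fields passed via data are sent as parts of the same multipart body when files is present:

metadata = {'title': 'Quarterly report', 'visibility': 'private'}

with open('document.pdf', 'rb') as f:
    # The tuple form lets you set the filename and content type explicitly
    files = {'file': ('document.pdf', f, 'application/pdf')}
    response = requests.post(
        'https://api.example.com/upload',
        data=metadata,   # ordinary form fields
        files=files      # file part of the multipart body
    )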

Parsing and Navigating JSON Response Data

Once you've received a JSON response, you'll need to extract the specific information you need. JSON structures map directly to Python data types—objects become dictionaries, arrays become lists, and primitive values remain as strings, numbers, or booleans.

response = requests.get('https://api.example.com/user/profile')
data = response.json()

# Accessing dictionary values
username = data['username']
email = data['email']

# Accessing nested data
city = data['address']['city']
country = data['address']['country']

# Iterating through arrays
for skill in data['skills']:
    print(skill['name'], skill['level'])

Real-world API responses often contain deeply nested structures. Using .get() instead of bracket notation provides safer access with default values:

# Safer approach with default values
username = data.get('username', 'Unknown')
city = data.get('address', {}).get('city', 'Not specified')
"Defensive programming when parsing API responses saves countless hours of debugging—always assume data might be missing or formatted unexpectedly."

Converting Between JSON and Python Objects

Sometimes you'll need to work with JSON data manually, especially when reading from files or working with string data:

# Converting JSON string to Python object
json_string = '{"name": "Alice", "age": 28}'
python_dict = json.loads(json_string)

# Converting Python object to JSON string
python_dict = {'name': 'Bob', 'age': 35}
json_string = json.dumps(python_dict, indent=2)

# Reading JSON from a file
with open('data.json', 'r') as file:
    data = json.load(file)

# Writing JSON to a file
with open('output.json', 'w') as file:
    json.dump(data, file, indent=2)

The indent parameter makes the JSON output human-readable, which is helpful for debugging but unnecessary in production. The difference between loads/dumps (working with strings) and load/dump (working with files) is subtle but important.

Implementing Authentication Strategies

Most production APIs require authentication to identify users, enforce rate limits, and protect sensitive data. Different APIs use different authentication methods, but several patterns are common.

🔐 API Keys

The simplest authentication method involves including an API key in your requests, either as a query parameter or header:

# API key as query parameter
params = {'api_key': 'your_api_key_here'}
response = requests.get('https://api.example.com/data', params=params)

# API key as header (more secure)
headers = {'X-API-Key': 'your_api_key_here'}
response = requests.get('https://api.example.com/data', headers=headers)

Never hardcode API keys directly in your source code. Instead, use environment variables or configuration files:

import os

api_key = os.environ.get('API_KEY')
headers = {'X-API-Key': api_key}
response = requests.get('https://api.example.com/data', headers=headers)

🔑 Bearer Tokens and OAuth

Many modern APIs use bearer tokens, often obtained through OAuth flows. These tokens typically go in the Authorization header:

headers = {
    'Authorization': 'Bearer your_access_token_here'
}
response = requests.get('https://api.example.com/protected', headers=headers)

For OAuth flows, you'll typically need to exchange credentials for an access token first:

token_url = 'https://api.example.com/oauth/token'
credentials = {
    'grant_type': 'client_credentials',
    'client_id': 'your_client_id',
    'client_secret': 'your_client_secret'
}

token_response = requests.post(token_url, data=credentials)
access_token = token_response.json()['access_token']

# Use the token for subsequent requests
headers = {'Authorization': f'Bearer {access_token}'}
response = requests.get('https://api.example.com/data', headers=headers)

🛡️ Basic Authentication

Some APIs use HTTP Basic Authentication, where you send a username and password. The requests library provides a convenient way to handle this:

from requests.auth import HTTPBasicAuth

response = requests.get(
    'https://api.example.com/data',
    auth=HTTPBasicAuth('username', 'password')
)

# Shorthand syntax
response = requests.get(
    'https://api.example.com/data',
    auth=('username', 'password')
)

Authentication Method | Security Level      | Common Use Cases                     | Implementation Complexity
API Keys              | Medium              | Public APIs, service-to-service      | Low
Bearer Tokens/OAuth   | High                | User-specific data, social platforms | Medium to High
Basic Auth            | Low (without HTTPS) | Internal tools, legacy systems       | Low
JWT Tokens            | High                | Microservices, modern web apps       | Medium

Error Handling and Response Validation

Robust API integration requires anticipating and handling failures gracefully. Network issues, server errors, invalid responses, and rate limiting can all occur in production environments.

"The mark of professional code isn't that it never fails—it's that it fails gracefully and provides meaningful information when things go wrong."

Checking Response Status

Always verify that requests succeeded before processing response data:

response = requests.get('https://api.example.com/data')

if response.status_code == 200:
    data = response.json()
    print("Success:", data)
elif response.status_code == 404:
    print("Resource not found")
elif response.status_code == 401:
    print("Authentication failed")
else:
    print(f"Request failed with status code: {response.status_code}")

The requests library provides a convenient method to raise exceptions for error status codes:

try:
    response = requests.get('https://api.example.com/data')
    response.raise_for_status()  # Raises HTTPError for bad status codes
    data = response.json()
except requests.exceptions.HTTPError as e:
    print(f"HTTP error occurred: {e}")
except requests.exceptions.ConnectionError:
    print("Failed to connect to the server")
except requests.exceptions.Timeout:
    print("Request timed out")
except requests.exceptions.RequestException as e:
    print(f"An error occurred: {e}")

⚠️ Handling JSON Parsing Errors

Not all responses contain valid JSON, even when you expect them to. Always wrap JSON parsing in error handling:

response = requests.get('https://api.example.com/data')

try:
    data = response.json()
except json.JSONDecodeError:
    print("Response was not valid JSON")
    print("Raw response:", response.text)

⏱️ Setting Timeouts

Without timeouts, your application can hang indefinitely waiting for slow servers. Always specify timeout values:

# Timeout after 5 seconds
response = requests.get('https://api.example.com/data', timeout=5)

# Separate connection and read timeouts
response = requests.get('https://api.example.com/data', timeout=(3, 10))

The tuple format (connection_timeout, read_timeout) gives you finer control—the first value limits connection establishment time, the second limits how long to wait for data once connected.

🔄 Implementing Retry Logic

Transient network issues are common. Implementing retry logic with exponential backoff makes your code more resilient:

import time

def fetch_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(url, timeout=5)
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                raise
            wait_time = 2 ** attempt  # Exponential backoff
            print(f"Attempt {attempt + 1} failed, retrying in {wait_time}s...")
            time.sleep(wait_time)

data = fetch_with_retry('https://api.example.com/data')

Working with Pagination and Large Datasets

APIs often limit response sizes and split large datasets across multiple pages. Understanding pagination patterns is essential for retrieving complete datasets.

📄 Offset-Based Pagination

This common pattern uses offset and limit parameters to navigate through results:

all_items = []
offset = 0
limit = 100

while True:
    params = {'offset': offset, 'limit': limit}
    response = requests.get('https://api.example.com/items', params=params)
    data = response.json()
    
    items = data['items']
    all_items.extend(items)
    
    if len(items) < limit:
        break  # No more pages
    
    offset += limit

print(f"Retrieved {len(all_items)} total items")

📑 Page-Based Pagination

Some APIs use page numbers instead of offsets:

all_items = []
page = 1

while True:
    params = {'page': page, 'per_page': 50}
    response = requests.get('https://api.example.com/items', params=params)
    data = response.json()
    
    items = data['items']
    all_items.extend(items)
    
    if page >= data['total_pages']:
        break
    
    page += 1

🔗 Cursor-Based Pagination

Modern APIs often use cursor-based pagination, which is more efficient for large datasets:

all_items = []
cursor = None

while True:
    params = {'limit': 100}
    if cursor:
        params['cursor'] = cursor
    
    response = requests.get('https://api.example.com/items', params=params)
    data = response.json()
    
    all_items.extend(data['items'])
    
    cursor = data.get('next_cursor')
    if not cursor:
        break  # No more pages
"Cursor-based pagination isn't just more efficient—it also prevents the duplicate or missing records that can occur with offset-based approaches when data changes during iteration."

Rate Limiting and Respectful API Usage

Most APIs enforce rate limits to prevent abuse and ensure fair resource allocation. Exceeding these limits typically results in temporary blocks or errors. Implementing proper rate limiting in your code demonstrates professionalism and ensures reliable operation.

⏲️ Understanding Rate Limit Headers

APIs often communicate rate limit information through response headers:

response = requests.get('https://api.example.com/data')

# Check rate limit headers
rate_limit = response.headers.get('X-RateLimit-Limit')
remaining = response.headers.get('X-RateLimit-Remaining')
reset_time = response.headers.get('X-RateLimit-Reset')

print(f"Rate limit: {rate_limit} requests")
print(f"Remaining: {remaining} requests")
print(f"Resets at: {reset_time}")

⚡ Implementing Rate Limiting

You can implement client-side rate limiting to stay within API constraints:

import time

class RateLimiter:
    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self.calls = []
    
    def wait_if_needed(self):
        now = time.time()
        self.calls = [call for call in self.calls if call > now - self.period]
        
        if len(self.calls) >= self.max_calls:
            sleep_time = self.period - (now - self.calls[0])
            time.sleep(sleep_time)
        
        self.calls.append(time.time())

# Allow 10 requests per minute
limiter = RateLimiter(max_calls=10, period=60)

for i in range(50):
    limiter.wait_if_needed()
    response = requests.get('https://api.example.com/data')
    print(f"Request {i+1} completed")

Advanced Techniques: Sessions and Connection Pooling

When making multiple requests to the same API, using a Session object improves performance by reusing TCP connections and persisting settings across requests:

session = requests.Session()

# Set headers that will apply to all requests
session.headers.update({
    'Authorization': 'Bearer your_token',
    'User-Agent': 'MyApp/1.0'
})

# Make multiple requests using the session
response1 = session.get('https://api.example.com/users')
response2 = session.get('https://api.example.com/posts')
response3 = session.get('https://api.example.com/comments')

session.close()  # Clean up when done
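
Sessions also support the context-manager protocol, so the underlying connection pool is closed automatically even if an exception occurs:

with requests.Session() as session:
    session.headers.update({'Authorization': 'Bearer your_token'})
    users = session.get('https://api.example.com/users')
    posts = session.get('https://api.example.com/posts')
# The session is closed automatically when the with-block exits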

Sessions maintain cookies automatically, which is useful for APIs that use session-based authentication:

session = requests.Session()

# Login and store session cookies
login_data = {'username': 'user', 'password': 'pass'}
session.post('https://api.example.com/login', data=login_data)

# Subsequent requests automatically include session cookies
profile = session.get('https://api.example.com/profile')
settings = session.get('https://api.example.com/settings')

🔧 Configuring Retry Strategies

The requests library can be enhanced with automatic retry capabilities using adapters:

from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

retry_strategy = Retry(
    total=3,
    status_forcelist=[429, 500, 502, 503, 504],
    allowed_methods=["HEAD", "GET", "OPTIONS"],  # called method_whitelist in older urllib3 releases
    backoff_factor=1
)

adapter = HTTPAdapter(max_retries=retry_strategy)
session.mount("https://", adapter)
session.mount("http://", adapter)

response = session.get('https://api.example.com/data')

This configuration automatically retries requests that fail with specific status codes, using exponential backoff between attempts.

Real-World Example: Building a Weather Data Fetcher

Let's put everything together in a practical example that demonstrates proper API integration patterns:

import requests
import os
from datetime import datetime

class WeatherAPI:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = 'https://api.openweathermap.org/data/2.5'
        self.session = requests.Session()
        self.session.params = {'appid': self.api_key, 'units': 'metric'}
    
    def get_current_weather(self, city):
        """Fetch current weather for a city with error handling."""
        try:
            response = self.session.get(
                f'{self.base_url}/weather',
                params={'q': city},
                timeout=10
            )
            response.raise_for_status()
            return self._parse_weather_data(response.json())
        
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 404:
                return {'error': f'City "{city}" not found'}
            return {'error': f'HTTP error: {e}'}
        
        except requests.exceptions.RequestException as e:
            return {'error': f'Request failed: {e}'}
    
    def get_forecast(self, city, days=5):
        """Fetch weather forecast for specified number of days."""
        try:
            response = self.session.get(
                f'{self.base_url}/forecast',
                params={'q': city, 'cnt': days * 8},  # 8 data points per day
                timeout=10
            )
            response.raise_for_status()
            return self._parse_forecast_data(response.json())
        
        except requests.exceptions.RequestException as e:
            return {'error': f'Forecast request failed: {e}'}
    
    def _parse_weather_data(self, data):
        """Extract relevant information from weather response."""
        return {
            'city': data['name'],
            'country': data['sys']['country'],
            'temperature': data['main']['temp'],
            'feels_like': data['main']['feels_like'],
            'humidity': data['main']['humidity'],
            'description': data['weather'][0]['description'],
            'wind_speed': data['wind']['speed'],
            'timestamp': datetime.fromtimestamp(data['dt'])
        }
    
    def _parse_forecast_data(self, data):
        """Parse forecast data into daily summaries."""
        forecasts = []
        for item in data['list']:
            forecasts.append({
                'datetime': datetime.fromtimestamp(item['dt']),
                'temperature': item['main']['temp'],
                'description': item['weather'][0]['description'],
                'humidity': item['main']['humidity']
            })
        return {'city': data['city']['name'], 'forecasts': forecasts}
    
    def close(self):
        """Clean up session resources."""
        self.session.close()

# Usage example
api_key = os.environ.get('OPENWEATHER_API_KEY')
weather = WeatherAPI(api_key)

current = weather.get_current_weather('London')
if 'error' not in current:
    print(f"Current weather in {current['city']}, {current['country']}:")
    print(f"Temperature: {current['temperature']}°C")
    print(f"Feels like: {current['feels_like']}°C")
    print(f"Conditions: {current['description']}")
else:
    print(current['error'])

weather.close()

This example demonstrates several best practices: using environment variables for API keys, creating a reusable class structure, implementing comprehensive error handling, using sessions for efficiency, setting appropriate timeouts, and parsing responses into clean data structures.

Debugging API Requests

When API calls don't work as expected, systematic debugging becomes essential. The requests library provides several tools to inspect what's actually being sent and received.

🔍 Inspecting Request Details

response = requests.get('https://api.example.com/data', params={'key': 'value'})

# View the actual URL that was requested
print("URL:", response.url)

# View request headers
print("Request headers:", response.request.headers)

# View response headers
print("Response headers:", response.headers)

# View raw response content
print("Raw content:", response.content)

# View response encoding
print("Encoding:", response.encoding)

📊 Enabling Detailed Logging

For deep debugging, you can enable HTTP logging to see the complete request-response cycle:

import logging
import http.client as http_client

http_client.HTTPConnection.debuglevel = 1

logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

response = requests.get('https://api.example.com/data')
"When debugging API issues, the problem is usually one of three things: authentication, data format, or endpoint URL. Systematic inspection of the actual request being sent eliminates guesswork."

💡 Testing with Mock Responses

During development, you might want to test your code without making real API calls. The responses library allows mocking:

import responses

@responses.activate
def test_api_call():
    responses.add(
        responses.GET,
        'https://api.example.com/data',
        json={'result': 'success'},
        status=200
    )
    
    response = requests.get('https://api.example.com/data')
    assert response.json()['result'] == 'success'

test_api_call()

Security Best Practices

Security considerations are paramount when working with APIs, especially when handling sensitive data or authentication credentials.

🔐 Managing Sensitive Information

Never commit API keys or secrets to version control. Use environment variables or secure configuration management:

# Using environment variables
import os

API_KEY = os.environ.get('API_KEY')
if not API_KEY:
    raise ValueError("API_KEY environment variable not set")

# Using python-dotenv for local development
from dotenv import load_dotenv

load_dotenv()  # Load variables from .env file
API_KEY = os.getenv('API_KEY')

Create a .env file for local development (and add it to .gitignore):

API_KEY=your_secret_key_here
API_SECRET=your_secret_value

🛡️ Validating SSL Certificates

Always verify SSL certificates in production to prevent man-in-the-middle attacks:

# Good: SSL verification enabled (default)
response = requests.get('https://api.example.com/data')

# Bad: SSL verification disabled (never do this in production)
response = requests.get('https://api.example.com/data', verify=False)

# Custom certificate bundle
response = requests.get('https://api.example.com/data', verify='/path/to/certfile')

🚨 Sanitizing User Input

When incorporating user input into API requests, always validate and sanitize to prevent injection attacks:

def search_users(username):
    # Validate input
    if not username.isalnum():
        raise ValueError("Username must be alphanumeric")
    
    if len(username) > 50:
        raise ValueError("Username too long")
    
    params = {'username': username}
    response = requests.get('https://api.example.com/users/search', params=params)
    return response.json()

Performance Optimization Strategies

As your application scales, optimizing API interactions becomes crucial for maintaining responsiveness and managing costs.

⚡ Implementing Caching

Caching reduces unnecessary API calls for data that doesn't change frequently:

import time

class CachedAPI:
    def __init__(self):
        self.cache = {}
        self.cache_duration = 300  # 5 minutes
    
    def get_data(self, endpoint):
        now = time.time()
        
        # Check if cached data exists and is still valid
        if endpoint in self.cache:
            data, timestamp = self.cache[endpoint]
            if now - timestamp < self.cache_duration:
                return data
        
        # Fetch fresh data
        response = requests.get(endpoint)
        data = response.json()
        
        # Store in cache
        self.cache[endpoint] = (data, now)
        return data

api = CachedAPI()
data = api.get_data('https://api.example.com/data')

🔄 Asynchronous Requests

When making multiple independent API calls, asynchronous requests can dramatically improve performance:

import asyncio
import aiohttp

async def fetch_url(session, url):
    async with session.get(url) as response:
        return await response.json()

async def fetch_multiple_urls(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_url(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
        return results

urls = [
    'https://api.example.com/data1',
    'https://api.example.com/data2',
    'https://api.example.com/data3'
]

results = asyncio.run(fetch_multiple_urls(urls))

📦 Batch Requests

Some APIs support batch operations that allow multiple requests in a single call:

batch_request = {
    'requests': [
        {'method': 'GET', 'path': '/users/1'},
        {'method': 'GET', 'path': '/users/2'},
        {'method': 'GET', 'path': '/users/3'}
    ]
}

response = requests.post('https://api.example.com/batch', json=batch_request)
results = response.json()['responses']

Handling Webhooks and Callbacks

Many APIs use webhooks to push data to your application rather than requiring constant polling. Setting up a simple webhook receiver requires creating a web endpoint:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook_handler():
    # Verify webhook signature (if provided by the API)
    signature = request.headers.get('X-Webhook-Signature')
    
    if not verify_signature(signature, request.data):
        return jsonify({'error': 'Invalid signature'}), 401
    
    # Process webhook data
    data = request.json
    print(f"Received webhook: {data}")
    
    # Perform actions based on webhook data
    process_webhook_data(data)
    
    return jsonify({'status': 'success'}), 200

def verify_signature(signature, payload):
    # Implement signature verification based on API documentation
    pass

def process_webhook_data(data):
    # Handle the webhook payload
    pass

if __name__ == '__main__':
    app.run(port=5000)
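
What verify_signature should do depends entirely on the provider's documentation, but a common scheme is an HMAC-SHA256 of the raw request body using a shared secret. A minimal sketch under that assumption (the WEBHOOK_SECRET environment variable and hex-digest format are hypothetical):

import hmac
import hashlib
import os

WEBHOOK_SECRET = os.environ.get('WEBHOOK_SECRET', '')

def verify_signature(signature, payload):
    # Recompute the HMAC over the raw body and compare in constant time
    if not signature:
        return False
    expected = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)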

Working with GraphQL APIs

While REST APIs are most common, GraphQL APIs are increasingly popular. They use a different approach where you specify exactly what data you need:

query = """
query {
    user(id: "123") {
        name
        email
        posts {
            title
            createdAt
        }
    }
}
"""

response = requests.post(
    'https://api.example.com/graphql',
    json={'query': query},
    headers={'Authorization': 'Bearer your_token'}
)

data = response.json()
user = data['data']['user']

For more complex GraphQL interactions, dedicated libraries like gql provide better tooling:

from gql import gql, Client
from gql.transport.requests import RequestsHTTPTransport

transport = RequestsHTTPTransport(
    url='https://api.example.com/graphql',
    headers={'Authorization': 'Bearer your_token'}
)

client = Client(transport=transport, fetch_schema_from_transport=True)

query = gql("""
    query GetUser($userId: ID!) {
        user(id: $userId) {
            name
            email
        }
    }
""")

result = client.execute(query, variable_values={'userId': '123'})

Monitoring and Logging API Usage

Production applications need comprehensive logging to troubleshoot issues and monitor API usage patterns:

import logging
from datetime import datetime

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('api_requests.log'),
        logging.StreamHandler()
    ]
)

logger = logging.getLogger(__name__)

class LoggedAPI:
    def __init__(self, base_url):
        self.base_url = base_url
        self.session = requests.Session()
    
    def request(self, method, endpoint, **kwargs):
        url = f"{self.base_url}{endpoint}"
        start_time = datetime.now()
        
        try:
            response = self.session.request(method, url, **kwargs)
            duration = (datetime.now() - start_time).total_seconds()
            
            logger.info(
                f"{method} {url} - Status: {response.status_code} - "
                f"Duration: {duration:.2f}s"
            )
            
            response.raise_for_status()
            return response
        
        except requests.exceptions.RequestException as e:
            duration = (datetime.now() - start_time).total_seconds()
            logger.error(
                f"{method} {url} - Error: {str(e)} - Duration: {duration:.2f}s"
            )
            raise

api = LoggedAPI('https://api.example.com')
response = api.request('GET', '/users/123')

Frequently Asked Questions

What is the difference between requests.get() and requests.post()?

The GET method retrieves data from a server without modifying anything, typically used for reading information. The POST method sends data to the server to create new resources or submit information. GET requests include parameters in the URL, while POST requests send data in the request body. Use GET for retrieving data and POST when you need to send data that creates or modifies resources on the server.

How do I handle API rate limits in Python?

Handle rate limits by checking response headers for rate limit information (like X-RateLimit-Remaining), implementing client-side rate limiting with time delays between requests, catching 429 status code errors, and implementing exponential backoff retry logic. You can create a rate limiter class that tracks request timestamps and automatically waits when approaching limits. Many production applications also cache responses to reduce the number of API calls needed.

What's the best way to store API keys securely?

Never hardcode API keys in your source code or commit them to version control. Use environment variables to store keys outside your codebase, utilize the python-dotenv library for local development with .env files (added to .gitignore), use secure secret management services like AWS Secrets Manager or HashiCorp Vault for production environments, and implement proper access controls. For team projects, use configuration management tools and document where secrets should be stored without including the actual values.

How can I make my API requests faster?

Improve API request performance by using Session objects to reuse connections, implementing caching for frequently accessed data that doesn't change often, making asynchronous requests with libraries like aiohttp when fetching multiple independent resources, using batch API endpoints when available, enabling compression in requests, and minimizing the data requested by using field selection parameters if the API supports them. Connection pooling and proper timeout configuration also contribute to better performance.

What should I do when an API request fails?

Implement comprehensive error handling by catching specific exception types (HTTPError, ConnectionError, Timeout), checking response status codes and handling different error types appropriately, implementing retry logic with exponential backoff for transient failures, logging detailed error information including request parameters and response content, providing meaningful error messages to users, and having fallback mechanisms or cached data when possible. Always validate responses before processing and use try-except blocks around API calls.

How do I work with paginated API responses?

Handle pagination by identifying the pagination method used (offset-based, page-based, or cursor-based), implementing loops that continue fetching until no more data is available, tracking pagination metadata from responses, and accumulating results across multiple requests. Check for pagination indicators like next_page, has_more, or next_cursor in responses. Be mindful of rate limits when fetching large paginated datasets and consider implementing progress tracking for long-running operations.