Tags: data-analysis, sql-generation, llm-analytics, code-execution, tutorial

How to Use LLMs for Data Analysis in 2026: Patterns and Pitfalls


Quick answer: LLMs excel at translating natural language to SQL, explaining datasets, generating analysis code, and summarizing results. They struggle with large numeric computations, statistical testing, and tasks requiring exact arithmetic. Use LLMs as the interface layer; use traditional tools (pandas, SQL engines, R) for the actual computation.


Pattern 1: Text-to-SQL

The highest-value LLM application in data analytics. Give the model your schema and a natural language question; get back a SQL query.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def text_to_sql(schema: str, question: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4",
        max_tokens=1024,
        system=f"""You are a SQL expert. Convert natural language questions to SQL queries.
        
Database schema:
{schema}

Rules:
- Return only the SQL query, no explanation
- Use standard SQL (compatible with PostgreSQL)
- Always limit results to 100 rows unless the user specifies otherwise""",
        messages=[{"role": "user", "content": question}]
    )
    return response.content[0].text

schema = """
table: orders (id, customer_id, amount, status, created_at)
table: customers (id, name, email, plan, created_at)
"""

query = text_to_sql(schema, "Show me the top 10 customers by revenue this month")
print(query)

Always validate generated SQL before executing against production databases. Run EXPLAIN on generated queries and check for common issues: missing WHERE clauses, cartesian joins, or overly expensive table scans.
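A minimal sketch of that validation step (the guard rules and the optional `conn` handle are illustrative, not exhaustive):

```python
import re

def check_generated_sql(query: str, conn=None) -> list:
    """Return a list of warnings; an empty list means the query passed the basic checks."""
    warnings = []
    sql = query.strip().rstrip(";")
    if not sql.lower().startswith(("select", "with")):
        warnings.append("not a read-only SELECT")
    if re.search(r"\b(insert|update|delete|drop|truncate|alter)\b", sql, re.I):
        warnings.append("contains a write/DDL keyword")
    if not re.search(r"\blimit\b", sql, re.I):
        warnings.append("no LIMIT clause; result size is unbounded")
    if re.search(r"\bjoin\b", sql, re.I) and not re.search(r"\b(on|using)\b", sql, re.I):
        warnings.append("JOIN without ON/USING; possible cartesian product")
    if conn is not None:  # let the query planner catch what regexes cannot
        with conn.cursor() as cur:
            cur.execute("EXPLAIN " + sql)
    return warnings
```

If the list is non-empty, either reject the query or feed the warnings back to the model and re-prompt.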


Pattern 2: Dataset insight extraction

import pandas as pd

def analyze_dataset(df: pd.DataFrame, question: str) -> str:
    # Prepare a description instead of passing raw data
    description = f"""
Dataset shape: {df.shape}
Columns: {df.dtypes.to_string()}
Summary statistics:
{df.describe().to_string()}
Sample (5 rows):
{df.head().to_string()}
Null counts:
{df.isnull().sum().to_string()}
"""
    response = client.messages.create(
        model="claude-sonnet-4",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": f"{description}\n\nQuestion: {question}"
        }]
    )
    return response.content[0].text

df = pd.read_csv("sales.csv")
insight = analyze_dataset(df, "What are the three most important trends in this data?")

Key pattern: Never pass raw data rows to LLMs for numeric analysis. Pass statistical summaries. LLMs are not calculators — they make arithmetic errors on raw numbers. Compute the numbers, then ask LLMs to interpret them.


Pattern 3: Code generation for analysis

def generate_analysis_code(schema: str, analysis_request: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4",
        max_tokens=2048,
        system="""Generate Python code using pandas and matplotlib for data analysis.
        Always include: data validation, error handling, clear variable names, and a print summary at the end.
        Return only executable Python code.""",
        messages=[{"role": "user", "content": f"""
        Dataset schema: {schema}
        Analysis request: {analysis_request}
        """}]
    )
    return response.content[0].text

code = generate_analysis_code(
    "orders.csv with columns: date, revenue, channel, region",
    "Calculate month-over-month revenue growth by channel and plot it"
)


Pattern 4: Report generation from structured results

import json

def generate_report(metrics: dict, context: str) -> str:
    response = client.messages.create(
        model="claude-haiku-4",  # Cheaper model is fine for templated reports
        max_tokens=1500,
        messages=[{
            "role": "user",
            "content": f"""
Write a concise executive summary (3-4 paragraphs) for a monthly business review.

Context: {context}

Key metrics:
{json.dumps(metrics, indent=2)}

Focus on: trends, anomalies, and 2-3 specific recommendations."""
        }]
    )
    return response.content[0].text
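The metrics dict itself should come from deterministic computation, with the model only narrating it. A sketch assuming `date` and `revenue` columns (the helper name is made up for illustration):

```python
import pandas as pd

def build_metrics(df: pd.DataFrame) -> dict:
    """Compute report numbers with pandas; the LLM never touches raw rows."""
    monthly = df.groupby(df["date"].dt.to_period("M"))["revenue"].sum()
    return {
        "total_revenue": float(monthly.sum()),
        "best_month": str(monthly.idxmax()),
        "mom_growth_pct": round(float(monthly.pct_change().iloc[-1]) * 100, 1),
    }
```

The resulting dict feeds straight into generate_report above, so every figure in the summary is traceable to a pandas computation.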


What LLMs are bad at in data analysis

Arithmetic on raw numbers: LLMs can make errors on multi-step calculations. Always compute numbers with proper tools, then pass results to LLMs for interpretation.

Statistical testing: Don't ask an LLM to run a t-test or determine statistical significance. Use scipy, R, or statsmodels and have the LLM explain the output.

Large dataset context: Passing 1,000 rows of CSV data to an LLM is expensive and often produces worse results than passing computed summaries.

Exact filtering and aggregation: SQL is better than natural language for precise data retrieval. Use LLMs to generate the SQL, not to reason about raw data.
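For the statistical-testing case, the split looks like this: run the test in scipy and hand the model only the formatted result to explain (a sketch; the function name is illustrative):

```python
from scipy import stats

def welch_summary(a, b) -> str:
    """Compute the statistics deterministically; ask the LLM only to interpret this string."""
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    return (f"Welch t-test: t={t:.3f}, p={p:.4f}; "
            f"mean_a={sum(a) / len(a):.2f}, mean_b={sum(b) / len(b):.2f}")
```

The model gets exact, precomputed numbers and can focus on what it does well: explaining what the result means for the business question.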

For model recommendations for these tasks, see our guides to the best LLM for data analysis and the best LLM for SQL.

