Exploratory Data Analysis

Exploratory Data Analysis (EDA) is a critical first step in any data mining project. For frequent itemset mining, EDA helps us understand the dataset characteristics, identify preprocessing requirements, and make informed decisions about algorithm parameters such as minimum support thresholds.

The experiments use the Amazon Reviews 2023 dataset from Hugging Face, which contains product reviews across multiple categories:

  • Source: Hugging Face Datasets - McAuley-Lab/Amazon-Reviews-2023
  • Categories Used: Appliances, Digital Music, Health and Personal Care, Handmade Products, All Beauty
  • Total Records: Over 4 million review records across all categories
  • Data Format: JSONL (JSON Lines) format with one review per line

The dataset contains the following key features (variables):

  • user_id (string): Unique identifier for each user/reviewer
  • asin (string): Amazon Standard Identification Number - unique product identifier
  • parent_asin (string, nullable): Parent product identifier for product variants
  • category (string): Product category (when combining multiple categories)
  • verified_purchase (boolean): Whether the purchase was verified
  • overall (float): Rating score (typically 1-5)
  • helpful (list): Helpful vote counts
  • unix_review_time (integer): Unix timestamp of review
  • review_text (string): Review content
  • summary (string): Review summary

The EDA process reveals key statistics about the dataset:

  • Total Records: 4,118,850 reviews (combined categories)
  • Verified Purchases: Percentage varies by category, typically 60-80% of reviews
  • Unique Users: Hundreds of thousands of unique reviewers
  • Unique Products (ASIN): Tens of thousands of individual products
  • Unique Product Groups (Parent ASIN): Fewer than individual ASINs, enabling product grouping

After converting reviews to transactions (grouping by user):

  • Total Transactions: Varies based on preprocessing parameters
  • Average Transaction Size: Typically 2-3 items per user
  • Transaction Size Range: From 1 item to dozens of items
  • Unique Items: Thousands to tens of thousands depending on category and grouping strategy

The dataset contains several types of missing values:

  1. parent_asin: Many products don’t have a parent ASIN (null values)

    • Handling: Fallback to asin when parent_asin is null
    • Impact: Affects product grouping strategy
  2. verified_purchase: Some records may have null values

    • Handling: Filtered to only include verified purchases (verified_purchase == True)
    • Rationale: Ensures data quality and reduces noise from unverified reviews
  3. user_id: Rarely missing, but critical for transaction creation

    • Handling: Records with missing user_id are excluded from transaction creation

The EDA process generates comprehensive visualizations to understand data patterns:

Data Exploration

Comprehensive data exploration visualization showing dataset statistics, user/product patterns, transaction configurations, item frequency analysis, and transaction size distributions.

Item Frequency Distribution

  • Purpose: Understand how items are distributed across transactions
  • Insights:
    • Most items appear infrequently (long-tail distribution)
    • A few items appear in many transactions (power-law distribution)
    • Helps determine appropriate minimum support thresholds

Transaction Size Distribution

  • Purpose: Understand the distribution of items per transaction
  • Insights:
    • Most transactions contain 1-5 items
    • Average transaction size guides preprocessing decisions
    • Helps set the min_transaction_size parameter

Support Threshold Analysis

  • Purpose: Determine how many frequent itemsets would be found at various support levels
  • Insights:
    • Shows exponential decay as the support threshold increases
    • Helps select an appropriate minimum support for algorithm execution
    • Critical for balancing computational cost against result completeness

User Activity Distribution

  • Purpose: Understand user purchasing behavior
  • Insights:
    • Most users make few purchases (1-2)
    • A few users are highly active (power users)
    • Affects the transaction creation strategy

Product Grouping Comparison

  • Purpose: Compare product granularity strategies (asin vs. parent_asin)
  • Insights:
    • Parent ASIN grouping reduces the item count
    • Parent ASIN creates more meaningful product associations
    • Helps decide between individual products and product groups

Top Products

  • Purpose: Identify the most popular products
  • Insights:
    • Reveals best-selling or frequently reviewed products
    • Helps understand domain-specific patterns
    • Useful for result interpretation
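The support threshold analysis above can be sketched with a simple sweep: count each item's support across transactions, then count how many items remain frequent as the threshold rises (toy transactions, not real data):

```python
from collections import Counter

# Toy transactions standing in for real user carts.
transactions = [
    ["A", "B"], ["A", "C"], ["A", "B", "C"], ["B", "D"], ["A"],
]
n = len(transactions)

# Support count per item (set() guards against duplicates within a cart).
freq = Counter(item for t in transactions for item in set(t))

# Sweep minimum support and count the items that remain frequent.
counts_at = {}
for min_support in (0.2, 0.4, 0.6):
    frequent = [i for i, c in freq.items() if c / n >= min_support]
    counts_at[min_support] = len(frequent)
    print(min_support, len(frequent))
```

Even on this toy data the count drops quickly as the threshold rises (4, then 3, then 2 items), mirroring the decay observed on the real dataset.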

EDA reveals several structural characteristics of the dataset:

Power-Law Distributions

  • Item Frequency: Most items appear in very few transactions, while a small number of items appear in many transactions
  • User Activity: Most users make few purchases, while a small number of users are highly active
  • Implication: Low minimum support thresholds are necessary to capture meaningful patterns

Sparsity

  • Observation: Average transaction size is small (2-3 items)
  • Implications:
    • High-dimensional sparse data
    • Traditional Apriori may generate many candidates
    • FP-Growth may be more efficient due to tree compression

Product Hierarchy

  • Observation: Using parent_asin instead of asin:
    • Significantly reduces the unique item count
    • Creates more meaningful associations (product variants are grouped together)
    • Increases the average transaction size
  • Implication: Better for association rule mining, as it captures product family relationships

Support Threshold Sensitivity

  • Observation: Small changes in the support threshold dramatically affect the number of frequent itemsets
  • Example: Reducing support from 1% to 0.5% may double the number of frequent itemsets
  • Implication: Careful threshold selection is critical for algorithm performance

Category Clustering

  • Pattern: Users typically purchase products within similar categories
  • Relevance: Suggests category-based analysis may reveal stronger patterns
  • Application: Can be used for recommendation systems

Product Co-occurrence

  • Pattern: Certain products frequently appear together in transactions
  • Relevance: The core of association rule mining
  • Application: Market basket analysis, product recommendations

Temporal Patterns

  • Pattern: Purchase patterns may vary over time (not explored in detail)
  • Relevance: Could inform time-sensitive association rules
  • Application: Seasonal product recommendations

EDA directly informs preprocessing decisions:

  1. Minimum Transaction Size:

    • Decision: Use min_transaction_size=2 (transactions must have at least 2 items)
    • Rationale: Single-item transactions don’t contribute to association rules
    • Impact: Reduces transaction count but improves quality
  2. Product Grouping Strategy:

    • Decision: Use parent_asin when available, fallback to asin
    • Rationale: Captures product family relationships, reduces sparsity
    • Impact: More meaningful association rules, better algorithm performance
  3. Minimum Support Threshold:

    • Decision: Use very low thresholds (0.05% - 0.5%) for comprehensive analysis
    • Rationale: Power law distribution means most items are infrequent
    • Impact: Balances completeness vs. computational cost
  4. Infrequent Item Filtering:

    • Decision: Filter items appearing in fewer than 3 transactions
    • Rationale: Removes noise, reduces computational overhead
    • Impact: Faster algorithm execution, cleaner results
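Decisions 1 and 4 combine into a two-stage filter: drop items seen in fewer than 3 transactions, then drop transactions left with fewer than 2 items. A sketch on toy carts:

```python
from collections import Counter

# Toy transactions; X and C are infrequent items that should be removed.
transactions = [
    ["A", "B"], ["A", "B", "X"], ["A", "B"], ["A", "C"], ["C"],
]

# Stage 1: keep only items appearing in at least 3 transactions.
counts = Counter(i for t in transactions for i in set(t))
kept_items = {i for i, c in counts.items() if c >= 3}
filtered = [[i for i in t if i in kept_items] for t in transactions]

# Stage 2: drop transactions with fewer than 2 remaining items.
filtered = [t for t in filtered if len(t) >= 2]
print(filtered)  # [['A', 'B'], ['A', 'B'], ['A', 'B']]
```

Note the ordering matters: item filtering can shrink a transaction below the size threshold, so the size filter runs second.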

The EDA process is implemented in the exploration.ipynb notebook and uses the following tools:

  • Polars: Efficient DataFrame operations for large datasets
  • NumPy: Statistical calculations
  • Matplotlib/Seaborn: Visualization
  • Collections: Frequency counting and data structures

The preprocessing module (preprocessing.py) provides utility functions for EDA:

  • filter_verified_purchases(): Filter dataset to verified purchases only
  • create_user_carts(): Group products by user to create transactions
  • get_transaction_stats(): Calculate transaction-level statistics
  • calculate_item_frequencies(): Count item frequencies across transactions
  • suggest_min_support(): Programmatically suggest minimum support thresholds
Example usage:

```python
from preprocessing import (
    filter_verified_purchases,
    create_user_carts,
    get_transaction_stats,
    calculate_item_frequencies,
)

# Load dataset (load_jsonl_dataset_polars and url are defined elsewhere)
data = load_jsonl_dataset_polars(url)

# Filter to verified purchases only
verified_data = filter_verified_purchases(data)

# Create user carts (transactions), grouping variants by parent ASIN
user_carts = create_user_carts(verified_data, use_parent_asin=True)

# Calculate statistics
transactions = [list(cart) for cart in user_carts.values()]
stats = get_transaction_stats(transactions)
item_freq = calculate_item_frequencies(transactions)

# Analyze results
print(f"Total transactions: {stats['num_transactions']}")
print(f"Unique items: {stats['num_unique_items']}")
print(f"Average transaction size: {stats['avg_transaction_size']:.2f}")
```

Exploratory Data Analysis provides critical insights that guide the entire frequent itemset mining process. By understanding data distributions, relationships, and patterns, we can:

  • Make informed preprocessing decisions
  • Select appropriate algorithm parameters
  • Interpret results meaningfully
  • Optimize algorithm performance

The EDA process reveals that the Amazon Reviews dataset is characterized by high sparsity, power law distributions, and small transaction sizes—all factors that influence algorithm selection and parameter tuning for frequent itemset mining.