Chapter 18: AI and Data Analysis for Decision Making
- Zack Edwards
- Nov 26
- 35 min read
My Name is John Tukey: Explorer of Data and Quiet Revolutionary in Statistics
My name is John Tukey, and I was born in 1915 in New Bedford, Massachusetts, into a world that had not yet learned to appreciate the hidden stories buried inside numbers. My parents were educators who filled our home with books, puzzles, and challenges. From an early age, I learned that thinking was not something to fear but something to enjoy. I did not yet know the term “data exploration,” but I was already living it—questioning, noticing patterns, and refusing to take any answer at face value.

Becoming a Mathematician in an Uncertain World
When I entered university, mathematics felt like a natural home. It was the one field where asking questions was not only allowed but required. My doctoral work brought me into the complex world of topology, but the world itself was changing faster than my equations. The Second World War pulled me into an entirely new direction: applied mathematics at Princeton and later statistical research for the U.S. government. That experience taught me how essential good data could be—not only to soldiers’ safety but to national security itself.
Where Data Cleaning Begins
The greatest surprise in my wartime work was discovering how often numbers lied, not because they wished to deceive but because nobody had prepared them to speak clearly. Data arrived messy, inconsistent, incomplete. I realized that before one could analyze anything, one must first clean it. Data cleaning, to me, was an act of respect—ensuring the numbers told the truth of the world instead of a distorted version of it. This belief followed me for the rest of my life and became the first step in what I would later call “Exploratory Data Analysis.”
Inventing a New Way to See Data
After the war, as I joined Princeton University and later Bell Labs, I found myself surrounded by brilliant minds building the foundations of computing and communication. What I contributed was a new way to look at data. Instead of treating statistics as a rigid set of rules, I treated it as an adventure. Data should be explored, not merely calculated. It should be visualized, questioned, and compared. That is how I came to invent the boxplot, a simple graphic that reveals distributions, outliers, and medians at a glance. It was not elegant for elegance’s sake; it was practical, a lantern to help others navigate dark and confusing datasets.
Statistical Thinking and the Power of Asking Why
Many believed statistical thinking meant memorizing formulas. I believed it meant learning to ask why. Why is this number larger than that one? Why does a pattern appear here and not there? Why do we trust a result? I taught students that good statistical thinking was not mechanical—it was human. It required intuition, creativity, and a willingness to ask uncomfortable questions. In my work at Bell Labs, this mindset helped guide early developments in computing, from fast Fourier transforms to exploratory algorithms that made data more accessible and insightful.
The Birth of Exploratory Data Analysis
In 1977, I put my thoughts into a book called Exploratory Data Analysis. It was not a book of recipes or formulas but a declaration of independence from rigid mathematical thinking. I urged readers to look, really look, at their data before leaping into assumptions. Draw pictures. Compare values. Notice anomalies. Treat data not as a burden but as a partner in discovery. My ideas helped lay the groundwork for modern data science, long before that term existed.
A Life Close to Curiosity Until the End
Looking back, I never chased fame—I chased understanding. Whether helping coin the word “bit” or advising on early computer development, my greatest joy came from making complex ideas simpler and helping others see what lay hidden in plain sight. My life’s work taught me that the world is full of patterns waiting for patient minds to uncover them. The tools have changed—now AI systems explore data in ways I could hardly have imagined—but the heart of the work remains the same: clean the data, explore the data, and let the data speak.
How AI Turns Raw Data Into Actionable Insights – Told by John Tukey
When we speak of raw data, we speak of a world that has yet to be understood. In my own work, I learned early that numbers rarely arrive neat and tidy. They come as fragments of stories, scattered observations, or incomplete records. What AI models do today is what I once trained young analysts to do by hand: gather the data, read it closely, and prepare it so it can speak clearly. AI begins by identifying what is noise and what is meaningful structure. It organizes, groups, and reformats the pieces so the real analysis can begin.

Recognizing Patterns Without Prejudice
Once the data is cleaned and prepared, AI engages in what I would have called exploratory work: looking for patterns without assuming what those patterns must be. The model examines relationships—numbers that rise together, variables that move in opposite directions, or clusters that form around shared characteristics. Where I once used early visualization tools, AI now evaluates thousands of possible relationships at once. It hunts for structure in a way that preserves curiosity, recognizing that the first pattern may not be the important one, and the last may hold the key.
Turning Patterns Into Understanding
Patterns alone are not insights. They are simply shapes in the fog. The next step is interpretation. AI models like ChatGPT's Advanced Data Analysis (ADA) compare the discovered patterns to known contexts. If sales rise each spring, the model can label that as seasonality. If two variables move in sync, it can suggest correlation. If one group behaves differently than the rest, it can identify segmentation or an anomaly worth investigating. This is where the model moves from merely observing to explaining, much like guiding a student across the bridge from description to meaning.
Transforming Understanding Into Action
An insight earns its name only when it leads to action. AI provides suggestions: reduce this cost here, increase this effort there, monitor this trend closely, or test a specific hypothesis. These recommendations come from evaluating the pattern in the context of goals. If the data shows a decline in engagement, the model may recommend a new approach and even help design an experiment to test it. The sequence is familiar to me: data leads to hypothesis, hypothesis leads to action, and action leads to learning. AI simply hastens the journey.
The Workflow That Makes Decisions Smarter
The path from raw data to decision is not a mystery. It follows a rhythm that I often taught to my students, though AI now performs it at a scale and speed unimaginable in my day. First comes the data, messy but full of potential. Then the discovery of patterns, found through careful and unbiased exploration. After that comes the insight, where meaning is attached to structure. Finally comes the action, the point where knowledge changes behavior. This workflow—data to patterns to insights to decisions—remains the foundation of good analysis. AI has refined it, accelerated it, and made it accessible to those without years of statistical training.
A Future Full of Curious Machines
If there is one lesson I would want students to keep, it is that exploration lies at the heart of understanding. AI tools are not replacing this principle; they are extending it. They sift, search, and highlight in ways that encourage deeper questions. They help humans see what was always there but often overlooked. The insights produced by AI are not magic; they are the result of disciplined curiosity, carried out by machines that follow the same path I once traced by hand. That path remains the key to unlocking truth from the chaos of raw data.
Fundamentals of Data Cleaning and Preparation – Told by John Tukey
The fundamentals of data cleaning and preparation are close to my heart, because they remind us that analysis does not begin with elegant charts or polished conclusions. It begins with disorder. When students open a spreadsheet or load a dataset, they often discover missing entries, inconsistent spellings, duplicated records, and numbers that simply do not belong. Before any exploration or modeling can occur, the data must be made trustworthy. AI tools such as ChatGPT ADA, Excel Copilot, and Google Sheets automate much of this early work, but the principles behind it remain the same as they were in my day: you cannot build insight on a shaky foundation.

Identifying and Removing Duplicates
One of the first tasks in cleaning is finding records that appear more than once. Duplicate entries can distort averages, inflate counts, or create the illusion of patterns that do not exist. Modern tools make this step approachable. Google Sheets can highlight duplicate rows; Excel Copilot can detect them automatically; and ChatGPT ADA can scan large tables to summarize where repetition occurs. Removing duplicates is not merely housekeeping—it is the first act of respecting the integrity of the dataset.
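The logic behind duplicate removal is worth seeing once by hand. The sketch below, using a small hypothetical set of order records, mirrors in plain Python what these tools do behind the scenes: build a fingerprint of each row and keep only the first occurrence.

```python
# Minimal sketch of duplicate removal on hypothetical records.
# The record names and values are illustrative, not from any real dataset.

def remove_duplicates(rows):
    """Keep the first occurrence of each record, preserving order."""
    seen = set()
    unique = []
    for row in rows:
        key = tuple(sorted(row.items()))  # hashable fingerprint of the row
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

orders = [
    {"customer": "Ana", "item": "Lemonade", "qty": 2},
    {"customer": "Ben", "item": "Cookie", "qty": 1},
    {"customer": "Ana", "item": "Lemonade", "qty": 2},  # exact duplicate
]

cleaned = remove_duplicates(orders)
print(len(orders), "->", len(cleaned))  # 3 -> 2
```

Keeping the first occurrence preserves the original order of the data, which matters when later rows depend on when they were recorded.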
Fixing Errors Before They Mislead You
Errors often hide in plain sight. A number typed incorrectly, a date in the wrong format, or a category spelled differently than the rest can derail an entire analysis. AI tools now assist by detecting irregularities and suggesting corrections. Excel Copilot can flag values that deviate sharply from the norm. Google Sheets formulas can standardize dates or identify non-numeric entries. ChatGPT ADA can even describe the kinds of errors present and propose strategies to repair them. This step is critical because errors tend to compound; one mistake left uncorrected can shape every conclusion that follows.
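A rough sketch of this kind of error detection, assuming a small hypothetical column of prices, shows the two checks in their simplest form: catch entries that are not numbers at all, and flag values that sit far from the rest.

```python
# Sketch of simple error detection on hypothetical price data:
# flag non-numeric entries, then values far from the median.
# The 10x threshold is an arbitrary illustrative rule, not a standard.
from statistics import median

raw_prices = ["4.50", "5.00", "four", "4.75", "450.0", "5.25"]

numeric, non_numeric = [], []
for value in raw_prices:
    try:
        numeric.append(float(value))
    except ValueError:
        non_numeric.append(value)  # e.g. a word typed where a number belongs

mid = median(numeric)
outliers = [v for v in numeric if v > mid * 10 or v < mid / 10]  # crude rule

print("non-numeric:", non_numeric)    # ['four']
print("possible outliers:", outliers)  # [450.0]
```

Notice that 450.0 is a perfectly valid number; only its distance from the others marks it as suspicious. That is why flagged values should be investigated, not deleted automatically.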
Standardizing Formats for Consistency
Data becomes far easier to analyze when all entries follow the same rules. This might mean converting all dates into one format, ensuring categories are labeled consistently, or aligning numerical values to the same units. AI tools excel here. Sheets can apply formatting rules across entire columns. Excel Copilot can transform text into consistent case or split combined values into clear fields. ChatGPT ADA can generate formulas or scripts to standardize formats across thousands of entries. Standardization allows every piece of data to be compared fairly, without hidden inconsistencies distorting the results.
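The same standardization can be sketched directly. Assuming a hypothetical column of dates in mixed formats and inconsistently labeled categories, a few lines unify both, which is essentially what the generated formulas or scripts do at scale.

```python
# Sketch of format standardization on hypothetical values:
# unify date formats to ISO and normalize category labels.
from datetime import datetime

def standardize_date(text):
    """Try several common formats and return an ISO date string."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"):
        try:
            return datetime.strptime(text, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None  # leave unrecognized dates for human review

dates = ["2024-03-05", "03/05/2024", "5 Mar 2024"]
categories = ["lemonade ", "LEMONADE", "Lemonade"]

clean_dates = [standardize_date(d) for d in dates]
clean_categories = [c.strip().title() for c in categories]

print(clean_dates)       # ['2024-03-05', '2024-03-05', '2024-03-05']
print(clean_categories)  # ['Lemonade', 'Lemonade', 'Lemonade']
```

Returning None for unrecognized dates, rather than guessing, is deliberate: ambiguous entries deserve a human decision.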
Cleaning as an Act of Understanding
It is tempting to see cleaning as a chore, but I urge students to approach it as part of the analysis itself. When you clean data, you learn its structure, its weaknesses, and its peculiarities. You discover what is typical, what is unusual, and what may need further investigation. AI can accelerate this process, but it cannot replace the analyst’s understanding. Students should use AI as a guide, observing what the tools highlight and reflecting on why those features matter.
Preparing Data for Honest Exploration
Once duplicates are removed, errors corrected, and formats standardized, the dataset becomes reliable. Now the analyst can explore, visualize, and model with confidence. AI tools have made this preparatory work faster than anything I could have imagined when I first proposed exploratory data analysis, but the core principle holds: preparation is not separate from analysis; it is the beginning of it. Clean data allows patterns to emerge naturally and truths to reveal themselves without distortion.
A Foundation for All Future Insight
Proper data cleaning may not feel glamorous, but it is the essential first step toward sound decision-making. Whether a student uses ChatGPT ADA to diagnose inconsistencies, Excel Copilot to correct errors, or Google Sheets to unify formats, the goal is always the same: create a dataset worthy of analysis. Only then can the insights that follow be trusted. In this way, cleaning is not a background task—it is the quiet but indispensable craft that supports every meaningful discovery.
Building Dashboards That Tell a Story
Building dashboards that tell a story begins long before a single chart appears on the screen. When I sit down to create one, the very first thing I ask is simple: what decision do I want someone to make after seeing this? A dashboard is not decoration. It is a tool that gathers the most important pieces of information and arranges them so the viewer understands what matters, what’s changing, and what they should do next. Whether I’m helping a small business owner track sales or guiding students through data literacy, the design process always begins with purpose.

Choosing the Tools That Bring the Story to Life
With purpose in mind, I turn to the tools that make dashboards powerful. Tableau AI offers smart suggestions and visual transformations that can reveal trends with remarkable clarity. Power BI Copilot helps build interactive tiles that update dynamically as more data is added. Coefficient.ai connects Google Sheets directly into dashboards so real-time changes flow seamlessly into the display. Each tool has its own strengths, but all share a common purpose: transforming rows of numbers into something the human mind can grasp quickly.
Designing for Clarity, Not Complexity
Once the tool is chosen, I focus on how to guide the viewer’s eyes through the story. A dashboard should not overwhelm with color or noise. It should highlight what is essential: the key metric, its direction, and any comparisons that help explain the trend. I often begin with a big number at the top, representing the primary metric—monthly sales, engagement rate, customer retention, or anything tied to the user’s goal. Beneath it, I place a trend line to show how that number is changing. To the side, I place comparison charts that reveal why it is changing. It is not the number alone that matters; it is the narrative it forms with its supporting cast.
Mini-Project: The Classroom Sales Tracker
A simple project I often use with students involves creating a classroom “sales tracker” dashboard. They imagine they’re running a small lemonade stand or classroom store and track costs, revenue, and profit over several weeks. Using Coefficient.ai, they link a Google Sheet of their recorded data to a dashboard that updates instantly. Tableau AI can suggest color-coded visuals that show profits rising or falling. Power BI Copilot can help them compare this week’s performance to last week’s. In a single dashboard, they can see how decisions—pricing, advertising, or inventory—affect the business.
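The arithmetic underneath the sales tracker is simple enough to write out. A sketch with hypothetical weekly figures computes the two numbers the dashboard visualizes: weekly profit and week-over-week change.

```python
# The math behind the classroom sales tracker, on hypothetical weekly data:
# profit per week, then the change from one week to the next.

weeks = [
    {"week": 1, "revenue": 40.0, "costs": 25.0},
    {"week": 2, "revenue": 55.0, "costs": 28.0},
    {"week": 3, "revenue": 50.0, "costs": 30.0},
]

profits = [w["revenue"] - w["costs"] for w in weeks]
changes = [round(b - a, 2) for a, b in zip(profits, profits[1:])]

print("profit by week:", profits)         # [15.0, 27.0, 20.0]
print("week-over-week change:", changes)  # [12.0, -7.0]
```

When students see the dashboard's colors rise and fall, they know exactly which subtraction produced each movement.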
Mini-Project: The Social Engagement Monitor
Another project involves tracking social media engagement for a hypothetical brand. Students gather likes, comments, shares, and views from several posts and load them into a sheet. Tableau AI helps them build a dashboard that highlights the top-performing posts and the days with the highest activity. Power BI Copilot can create an interactive feature that filters posts by content type or time of day. The lesson is simple but powerful: storytelling with data is not just for analysts; it’s for anyone who wants to understand their audience.
Letting the Dashboard Guide Decisions
A dashboard earns its value when it drives action. If the top chart shows declining engagement, the lower charts must explain why. Perhaps one content type is underperforming. Perhaps the best posting days aren’t being used. Perhaps last month’s experiment—posting more video—shows a clear improvement. By seeing these relationships visually, students learn that data is not an obstacle but a guide. AI-supported tools make the patterns clearer, but the decision-making remains proudly human.
Refining the Story Over Time
No dashboard is perfect on its first day. As more data arrives or the user’s goals shift, dashboards need to evolve. Tableau AI can suggest new visuals when trends change. Power BI Copilot can restructure entire layouts based on newly added metrics. Coefficient.ai can expand the data source as the project grows. The dashboard becomes a living story, updated every day by the choices the user makes and the results they track.
A Dashboard’s Story Is a Pathway to Understanding
In every project I’ve built or taught, I’ve learned that the best dashboards are those that help someone understand their world at a glance. They turn confusion into clarity and hesitation into action. With tools like Tableau AI, Power BI Copilot, and Coefficient.ai, today’s learners can create dashboards that once required a full team of specialists. And more importantly, they learn that data becomes meaningful only when it tells a story—one that empowers better choices, smarter experimentation, and confident decisions.
My Name is W. Edwards Deming: Teacher of Quality, Statistics, and Leadership
My name is W. Edwards Deming, and I was born in 1900 in Sioux City, Iowa, into a world on the brink of rapid industrial expansion. My childhood was shaped by long days on my family’s farm in Wyoming, where nothing was wasted, every decision mattered, and every problem required a practical solution. I did not yet know the language of data-driven decision making, but I lived its principles every day: observe closely, measure thoughtfully, and act based on evidence rather than impulse.

From Engineering to the Heart of Statistics
When I left home to study electrical engineering, I found myself drawn more to the patterns behind systems than to the machines themselves. Statistics offered a language for understanding variation, quality, and prediction. I pursued it with determination, eventually earning a doctorate in mathematical physics. My early work with the U.S. Department of Agriculture and the U.S. Census Bureau taught me the power of systematic measurement. When you measure with care and analyze with discipline, the world reveals its hidden structure.
Entering the Era of Data-Driven Decisions
During the Second World War, I trained thousands of engineers, managers, and line workers in statistical methods to improve wartime production. I taught that decisions should not be based on hunches or authority but on data—on the cold, clear facts. Fact-based decisions were not fashionable in business at the time; most leaders relied on intuition or tradition. But I pressed forward, convinced that if people learned to understand variation, identify root causes, and measure outcomes, they could transform any system they touched.
The Power of Predictive Models
After the war, I traveled to Japan, where my ideas found a willing audience. Japan was rebuilding from devastation, and its leaders sought not just to regain stability but to redefine quality itself. They listened as I explained that predictive models are not crystal balls—they are tools for understanding how systems behave. By analyzing past variation, one can forecast future outcomes. By testing small changes, one can foresee which decisions lead to improvement and which lead to decline. The Japanese embraced this thinking, creating quality circles, statistical control charts, and processes where data guided every action. Their rise from postwar hardship to industrial leadership stands as one of the greatest success stories of evidence-based decision making.
A/B Testing Before It Had a Name
Long before modern marketers used the term “A/B testing,” I taught companies to test ideas in controlled and measurable ways. Try one method here, another method there. Measure both. Compare results. Learn from variation. Improve the system. This wasn’t merely a technique—it was a philosophy. Experimentation should be continuous, and learning should be built into the very structure of the organization. I called this approach the Plan–Do–Study–Act cycle, a framework that lives on today in every industry that values iterative improvement.
A Philosophy That Reshaped Modern Management
My career unfolded not as a technician but as a teacher of principles. I taught that most problems come from the system, not the worker. I argued that cooperation outperforms competition, and that true leadership means guiding with knowledge rather than fear. Underneath these ideas lay a simple truth: data allows us to see reality as it is, not as we assume it to be. And once you truly see, you can improve.
The Global Legacy of Data-Driven Excellence
As I traveled the world, advising companies and teaching executives, I watched the seeds of statistical thinking grow into forests of continuous improvement. My work influenced manufacturing, healthcare, education, government, and countless other fields. I lived long enough to witness industries transformed by ideas that were once dismissed as radical. Today, data-driven decisions, predictive analytics, and systematic experimentation form the backbone of modern quality and business strategy. These methods now reach into the digital age, carried further by machines capable of analyzing data at unimaginable speeds.
A Life Committed to Learning and Improvement
Looking back, my purpose was never to impose rules but to teach a way of thinking. Systems can be improved. People can learn. Data can guide us. Improvement is infinite. I believed in the dignity of work and the potential of every person to contribute to quality when given the right tools. Though the world has changed, the principles remain. The future belongs to those who measure carefully, think statistically, test boldly, and lead with knowledge.
Predictive Analytics for Beginners – Told by W. Edwards Deming
Predictive analytics for beginners begins with a truth I spent much of my career teaching: the future is not guessed; it is understood through patterns of the past. When I speak of prediction, I am not speaking of fortune-telling but of disciplined learning. Every system leaves traces of its behavior—its rises, its dips, its seasons, and its irregularities. Predictive analytics takes these traces, organizes them, and turns them into a model of what is likely to happen next. AI tools such as ChatGPT ADA or Google Sheets simply accelerate this process, offering students the ability to build forecasts without needing years of statistical training.

Finding Trends Hidden in the Noise
The first lesson in forecasting is learning to distinguish variation. Some variation is natural and random; some is meaningful and directional. AI tools can help identify these differences. A student can load a small dataset into Sheets, such as weekly sales, and ask ADA to explain the underlying trend. The model will separate the short-term fluctuations from the long-term direction. When the trend clearly rises or falls over time, we can treat it as a genuine signal rather than a temporary bump. This step teaches students that prediction begins with respect for the structure of data.
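One classical way to separate short-term fluctuation from long-term direction is a moving average, which smooths away the bumps so the trend stands out. A minimal sketch, assuming a small hypothetical series of weekly sales:

```python
# Separating noise from trend with a centered moving average.
# The weekly sales figures are hypothetical.

def moving_average(values, window=3):
    """Smooth a series so the underlying trend is easier to see."""
    half = window // 2
    return [
        round(sum(values[i - half:i + half + 1]) / window, 2)
        for i in range(half, len(values) - half)
    ]

weekly_sales = [102, 98, 110, 105, 120, 118, 131]
trend = moving_average(weekly_sales)
print(trend)  # [103.33, 104.33, 111.67, 114.33, 123.0]
```

The raw series dips and jumps, but the smoothed values climb steadily: that steady climb is the signal, and the rest is the natural variation a forecast should not chase.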
Recognizing Seasonality and Cycles
Many systems are shaped by the calendar. Businesses sell more during holidays, farmers harvest in certain months, and students study harder near exam season. This repeating pattern is seasonality. AI models can detect it automatically by comparing each period to past periods. A student working in Sheets might select “Forecast” or ask ADA directly: “Identify seasonal patterns in this dataset.” When the model reveals peaks and valleys occurring at regular intervals, beginners can immediately see why forecasts must account for time, not just numbers. A good prediction recognizes that the future follows rhythms.
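The comparison of each period to past periods can be sketched by hand. Assuming two hypothetical years of quarterly sales, a seasonal index divides each quarter's average by the overall average; values above 1 mark the seasonally strong quarters.

```python
# Sketch of seasonality detection on hypothetical quarterly sales:
# compare each quarter's average to the overall average.

sales = [80, 95, 130, 150,   # year 1, Q1-Q4
         85, 100, 140, 160]  # year 2, Q1-Q4

overall = sum(sales) / len(sales)
seasonal_index = []
for q in range(4):
    quarter_avg = (sales[q] + sales[q + 4]) / 2
    seasonal_index.append(round(quarter_avg / overall, 2))

print(seasonal_index)  # an index above 1 marks a seasonally strong quarter
```

Here the fourth quarter's index is well above 1 in both years, so a forecast that ignored the calendar would undershoot every year-end.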
Building Simple Sales Projections
To understand forecasting, students benefit from a practical exercise. Imagine they have monthly sales for a small shop or classroom fundraiser. They upload the numbers into Sheets and ask ADA: “Create a three-month sales forecast using this data.” The model produces a projection based on past trends and seasonality. Power BI Copilot or similar tools can display the results on a chart, showing the historical line and the predicted extension. When students compare actual sales in the following weeks, they learn firsthand how prediction guides planning and how improvement comes from monitoring accuracy.
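The simplest version of such a projection is a straight line fit to past values and extended forward. This sketch, with hypothetical monthly sales, uses ordinary least squares; real tools layer seasonality and confidence intervals on top of the same idea.

```python
# A hand-rolled linear trend forecast on hypothetical monthly sales:
# fit a least-squares line, then extend it three months.

monthly_sales = [200, 210, 225, 230, 245, 260]
n = len(monthly_sales)
xs = list(range(n))

# Ordinary least-squares slope and intercept.
mean_x = sum(xs) / n
mean_y = sum(monthly_sales) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_sales))
    / sum((x - mean_x) ** 2 for x in xs)
)
intercept = mean_y - slope * mean_x

forecast = [round(intercept + slope * x, 1) for x in range(n, n + 3)]
print(forecast)  # the next three months, if the trend continues
```

The phrase "if the trend continues" is the honest part of every forecast; comparing these projections to actual results is where the learning happens.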
Using AI Prompts to Support Analysis
Prompts make predictive analytics accessible. Students can ask ADA: “Explain the trend in this dataset,” “Show the next five predicted values,” or “Identify any unusual points that might distort the forecast.” These questions activate analytical routines that once required complex formulas. The model might highlight a sudden spike that should be excluded or explain that recent growth suggests stronger future performance. Through prompting, students learn that prediction is not a black box—it is a conversation with the data, guided by curiosity and informed by structure.
Moving From Forecasts to Decisions
Prediction becomes valuable only when it guides action. If the model forecasts rising demand, a business may increase production. If a downward trend appears, it may investigate causes or test improvements. AI tools support this by allowing students to compare scenarios. For example, they can ask ADA to simulate what happens if prices rise, if production costs fall, or if marketing increases. These scenario projections help students understand the consequences of choices before they commit to them. Forecasting, then, is not merely about seeing ahead; it is about preparing for what lies ahead.
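Scenario comparison of this kind reduces to recomputing a projection under different assumptions. A minimal sketch, with entirely hypothetical figures, projects profit under a baseline, a price increase that loses some demand, and a cost reduction:

```python
# Sketch of scenario comparison on hypothetical figures:
# project profit under different price and cost assumptions.

def project_profit(units, price, unit_cost, fixed_costs):
    return units * price - units * unit_cost - fixed_costs

baseline = project_profit(units=500, price=4.0, unit_cost=1.5, fixed_costs=600)
# Assume the higher price loses some demand (460 units instead of 500).
price_rise = project_profit(units=460, price=4.5, unit_cost=1.5, fixed_costs=600)
cost_cut = project_profit(units=500, price=4.0, unit_cost=1.2, fixed_costs=600)

for name, profit in [("baseline", baseline),
                     ("raise price", price_rise),
                     ("cut cost", cost_cut)]:
    print(f"{name}: {profit:.2f}")
```

The numbers themselves matter less than the habit: state your assumptions explicitly, compute the consequences, and only then commit.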
Improving Forecasts Through Continuous Learning
No predictive model is perfect. As new data arrives, forecasts must be updated. This is where the discipline of learning enters. Students can refine their predictions by incorporating new months of data and asking ADA to recalculate the trend. Over time, they observe how forecasts change, why they change, and how the system reacts to new pressures. This practice mirrors the continuous improvement cycle I taught in every field: measure, study, adjust, and repeat. Good forecasting is not a single act but a continual process of learning from reality.
Forecasting as a Foundation for Future Insight
Predictive analytics may seem technical, but its heart is simple: use the past to illuminate the future. With AI tools such as ChatGPT ADA, Power BI Copilot, and Google Sheets, students can build forecasts that once required entire statistical departments. More importantly, they learn that prediction guides planning, testing, and improvement. When beginners see their forecasts align with real results, they discover the power of understanding variation and the value of making decisions based on evidence rather than instinct. This is the foundation upon which every system—business, classroom, or community—can grow wiser over time.
Using AI for Competitor & Market Research – Told by W. Edwards Deming
Using AI for competitor and market research is a subject that demands both technical skill and ethical discipline. Market research is not a game of spying or manipulation; it is an effort to understand the conditions under which an organization must operate. When I taught leaders to improve systems, I insisted they study causes, not blame people. In the same way, studying competitors is not about copying them but about learning what drives customer behavior, what trends shape demand, and what opportunities exist for improvement. AI tools now gather and organize market information at remarkable speed, but the principles that guide their use must remain rooted in integrity.

Gathering Customer Reviews as Signals of Need
Customer reviews are a modern treasure of insight. They reveal what people appreciate, what frustrates them, and what unmet needs remain. AI tools such as ChatGPT ADA can analyze thousands of reviews in seconds. A student may prompt: “Summarize customer complaints across these products,” or “Highlight the most common phrases in positive reviews.” The model will return themes—durability, pricing, packaging, or customer support. These themes become clues. They show where value is created and where value is lost. When used well, review analysis helps organizations focus on real needs rather than assumptions.
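The core of that theme extraction is counting which meaningful words recur across reviews. A toy sketch, using three hypothetical reviews and a small stop-word list, shows the mechanic that the AI applies with far more nuance:

```python
# Minimal sketch of review-theme mining on hypothetical reviews:
# count the most common meaningful words.
from collections import Counter

reviews = [
    "Great battery life but the packaging was damaged",
    "Battery lasts all day, shipping packaging could be better",
    "Love the battery, case feels cheap",
]

# Tiny illustrative stop-word list; real analyses use much larger ones.
stop_words = {"the", "but", "was", "all", "could", "be", "a", "and"}

words = Counter(
    token
    for review in reviews
    for token in (w.strip(".,!?") for w in review.lower().split())
    if token not in stop_words
)

print(words.most_common(3))  # recurring themes surface at the top
```

Here "battery" and "packaging" rise to the top immediately, and those two words are precisely the themes an analyst would investigate first.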
Examining Competitor Pricing With Clarity
Competitor pricing often confuses beginners. They think matching a price or undercutting it is strategy. It is not. Pricing is a reflection of value, cost, and the expectations of the market. AI can help students compare prices across products, regions, and time periods. Tools connected to Sheets or online sources can gather price lists and allow ADA to explain: “How do these products differ in features?” or “What pricing patterns exist across the market?” The goal is not to imitate a competitor’s price but to understand the reasoning behind it. By learning the logic of the market, students gain insight into positioning and differentiation.
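A price comparison of this kind starts with a simple summary by brand. The sketch below, with hypothetical brands and publicly-listed-style prices, computes the minimum, average, and maximum per brand so positioning becomes visible at a glance:

```python
# Sketch of a pricing comparison on hypothetical listings:
# summarize prices by brand to see market positioning.

listings = [
    ("BrandA", 29.99), ("BrandA", 34.99),
    ("BrandB", 49.99), ("BrandB", 54.99),
    ("BrandC", 19.99),
]

summary = {}
for brand, price in listings:
    summary.setdefault(brand, []).append(price)

for brand, prices in sorted(summary.items()):
    avg = sum(prices) / len(prices)
    print(f"{brand}: min {min(prices):.2f}, avg {avg:.2f}, max {max(prices):.2f}")
```

Seeing BrandB priced well above the others raises the right question: not "should we match it?" but "what value justifies that position?"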
Identifying Demand Trends That Shape the Future
Markets change in cycles, often subtly at first. AI tools can detect shifts long before they become obvious. Students can ask ADA: “Identify trends in this product category,” or “Predict demand for the next six months.” The model may highlight rising interest in specific materials, features, or styles. It might notice seasonal patterns or sudden surges driven by events. These insights help organizations anticipate, rather than react. Forecasting demand is a vital function, for it ensures that quality and resources align with actual customer needs.
Ethical Guidelines for Responsible Market Research
No matter how powerful the tools, ethics must lead the process. I often told executives that trust is the foundation upon which all improvement rests. When using AI for market research, several principles must be upheld. First, gather only information that is publicly available; do not seek private data or exploit vulnerabilities. Second, analyze the data to learn, not to harm competitors or manipulate customers. Third, present findings honestly, without exaggeration or selective reporting. Fourth, respect confidentiality within your own organization—data must inform decisions, not fuel gossip. Without these principles, research becomes misuse, and misuse destroys credibility.
Mini-Project: Evaluating a Product Category
Students can practice these ideas by conducting a simple market analysis. They choose a category—perhaps earbuds, backpacks, or digital planners. Using publicly available reviews and prices, they load data into Sheets. With ADA, they ask for summaries of customer complaints, comparisons of price features, and identification of common trends. They build a short report that answers: What do customers value? What frustrates them? Where is demand shifting? What opportunities exist for improvement? This exercise teaches them how analysis supports thoughtful decisions.
Letting Insights Guide Process, Not Ego
Competitor and market research should never be used to inflate one’s sense of superiority or to justify shortcuts. Instead, insights should guide improvements to systems, products, and services. If a competitor’s customers praise durability, a company must ask how it can improve its own processes. If the market demands lower environmental impact, leaders must rethink how materials are sourced. AI can highlight patterns, but it cannot make the ethical choice to improve. That responsibility belongs to the human being interpreting the results.
Research as a Path to Better Service
At its core, competitor and market research exists for one purpose: to serve customers better. When AI organizes and interprets large volumes of information, students can focus on what matters—the needs of the people they hope to help. Through ethical use of these tools, organizations gain clarity, foresight, and a spirit of continuous learning. In this way, the study of competitors becomes a study of opportunity, and the study of the market becomes a study of responsibility.
Visualizing Data for Business Decisions
Visualizing data for business decisions begins with a simple truth I share with every student: numbers alone rarely persuade anyone. It is only when those numbers take shape—lines rising, bars falling, colors shifting—that people grasp what is really happening. A good visualization does not just display data; it reveals the story behind the data. When I teach students how to use charts, graphs, and heat maps, I want them to understand that these tools exist to create clarity, not complexity. The right visual can turn confusion into comprehension in a single glance.

Choosing the Right Visual for the Right Purpose
When I sit down with a dataset, the first question I ask is not “What can I draw?” but “What needs to be understood?” If the goal is to compare categories, bar charts offer clean separation. If the goal is to see progress over time, a line chart shows the journey. Scatter plots help uncover relationships, while heat maps highlight intensity—sales by region, engagement by hour of day, or performance across teams. Tableau AI and Excel Copilot now offer suggestions that guide students toward the correct visual based on the structure of their data. These tools help them focus on meaning rather than guesswork.
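The chart-selection guidance above can be captured as a tiny lookup, in the spirit of the suggestions Tableau AI or Copilot offer. The function name and goal phrases here are my own illustrations, not part of either tool.

```python
# A minimal sketch of the chart-selection guidance; the goal phrases
# and fallback advice are invented for illustration.
def suggest_chart(goal: str) -> str:
    """Map an analytical goal to a suitable chart type."""
    suggestions = {
        "compare categories": "bar chart",
        "show change over time": "line chart",
        "find relationships": "scatter plot",
        "show intensity": "heat map",
    }
    return suggestions.get(goal, "start with a simple table")

print(suggest_chart("show change over time"))  # line chart
```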
Using Excel Copilot to Shape the Story
Excel Copilot has become one of my favorite teaching tools because it helps young analysts articulate what they want without needing to memorize formulas. A student can write, “Show me a chart of monthly revenue,” and Copilot responds with a clean visualization. They can add, “Highlight the months with the highest sales,” and the chart updates with emphasis where it matters most. Copilot’s suggestions often reveal trends students didn’t notice: sharp increases before holidays, dips during the summer, or periodic spikes tied to promotions. Visualization becomes a conversation with the data.
Exploring Patterns Through Tableau AI
Tableau AI takes visualization further by offering interactive exploration. When students connect their dataset, Tableau proposes visuals that expose meaningful structures—clustered regions on a scatter plot, peaks in time-series data, or geographic variations on a map. Students can filter, hover, drill down, and rearrange elements until they uncover insights that static charts might hide. This process teaches them that visuals are not just end products; they are tools for discovery. By adjusting the view, they learn to ask better questions and look deeper into the patterns.
Mini-Project: Understanding Customer Behavior
One classroom project I often use involves analyzing customer purchase behavior. Students collect anonymized data—purchase date, item bought, price, and payment method. With Excel Copilot, they generate a line chart showing purchase activity over time. In Tableau AI, they create a heat map where rows represent days of the week and columns represent times of day. Colors reveal the busiest shopping periods. Students quickly see when customers tend to buy, which items perform best, and when promotions would be most effective. The visuals tell a story no spreadsheet could communicate on its own.
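Behind a heat map like that sits a simple grid of counts: rows for days of the week, columns for hours of the day. A minimal sketch with invented timestamps:

```python
from collections import Counter
from datetime import datetime

# Hypothetical purchase timestamps; in class these come from the
# students' anonymized dataset.
timestamps = [
    "2024-03-01 09:15", "2024-03-01 18:40", "2024-03-02 18:05",
    "2024-03-04 09:30", "2024-03-08 18:20", "2024-03-09 18:55",
]

# Count purchases per (day-of-week, hour) cell -- the same grid a
# heat map colors in Tableau.
cells = Counter()
for ts in timestamps:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
    cells[(dt.strftime("%A"), dt.hour)] += 1

print(cells[("Friday", 18)])  # 2 purchases land in the Friday-6pm cell
```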
Mini-Project: Tracking Team Performance
Another project focuses on tracking performance across teams or departments. Students build a dashboard using charts and heat maps to compare productivity, identify bottlenecks, or highlight high-performing groups. Tableau AI might suggest a stacked bar chart to show workload distribution or a scatter plot to compare speed and accuracy. Excel Copilot can format the visuals, label them clearly, and help summarize what the data shows. The students learn that transparency leads to better teamwork and smarter decisions.
Turning Visuals Into Decisions
A visualization earns its value when it prompts action. A declining line invites investigation. A bright red corner of a heat map signals a problem area. A cluster on a scatter plot reveals a pattern worth exploring. When students share their visuals, they are not merely presenting charts—they are presenting insights. And when the people reviewing those visuals make decisions based on them, the students see the true power of clear communication.
Growing Visual Thinkers for the Future
In every lesson I teach, I remind students that visualization is not about decoration. It is a language. A line tells a story of progress. A heat map reveals the heartbeat of activity. A scatter plot uncovers hidden relationships. With Excel Copilot and Tableau AI, students now have tools that help them translate raw data into meaning with unprecedented ease. And once they learn to choose the right visual and interpret its message, they gain a skill that will serve them in every classroom, every business, and every future challenge they face.
Data-Driven Decision Models
Data-driven decision models begin with a lesson I teach in every workshop: decisions become clearer when we make the invisible visible. Most people carry their choices in their heads—preferences, fears, guesses, assumptions. Models like decision trees, SWOT matrices, risk scoring, and prioritization grids take those hidden thoughts and place them where we can see them. When students use AI tools to build or interpret these models, the process becomes even faster and more transparent. These tools allow them to compare options, spot consequences, and choose paths based on evidence rather than instinct.

Seeing Pathways Through Decision Trees
Decision trees are one of the simplest ways to clarify complicated choices. I often introduce them by giving students a scenario—launching a new product, choosing a marketing strategy, or investing in a project. With AI tools like ChatGPT ADA, they can ask, “Build a decision tree showing the possible outcomes.” The model generates branches that represent actions and consequences: If you spend more on marketing, sales may rise; if you reduce spending, risk increases. Students quickly see that each decision opens new pathways. The tree becomes a map of possibilities, showing how today’s choices shape tomorrow’s results.
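One way to make a decision tree concrete is to attach rough probabilities and payoffs to each branch and compare expected values. The probabilities and dollar figures below are invented for illustration, not taken from any real scenario.

```python
# Hypothetical two-branch decision tree: each action leads to
# (probability, payoff) outcomes. All numbers are invented.
tree = {
    "spend more on marketing": [
        (0.6, 50_000),   # 60% chance sales rise
        (0.4, -20_000),  # 40% chance the extra spend is wasted
    ],
    "reduce spending": [
        (0.5, 10_000),
        (0.5, -5_000),
    ],
}

def expected_value(branches):
    """Probability-weighted average payoff of one action's outcomes."""
    return sum(p * payoff for p, payoff in branches)

best = max(tree, key=lambda action: expected_value(tree[action]))
print(best, expected_value(tree[best]))  # spend more on marketing 22000.0
```

Students quickly see that expected value is only one lens: a risk-averse decision maker might still prefer the smaller, safer branch.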
Understanding Strategy Through SWOT Matrices
SWOT matrices—strengths, weaknesses, opportunities, and threats—help students look at a situation from all angles. When they load information into a prompt and ask AI to create a SWOT matrix, the results are often sharper than what they would produce alone. AI can search for overlooked opportunities, hidden trends, or external threats that students might not consider. A SWOT matrix gives structure to a conversation that might otherwise drift. It reminds students that strategy is not luck but awareness—awareness of what they can control and what they must prepare for.
Measuring Uncertainty With Risk Scoring
Risk scoring teaches students that uncertainty is measurable. Every decision carries some level of possibility and some level of danger. AI is particularly useful here because it can score risks using past data, projections, and known outcomes. Students might ask: “Score the risks associated with expanding into a new market,” or “Rank these operational risks from lowest to highest impact.” The model produces a list with explanations that clarify why certain risks matter more. This is not about eliminating uncertainty; it is about facing it with clarity and allocating resources where they matter most.
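A common scoring scheme, which an AI tool might also apply, multiplies likelihood by impact and ranks the results. The risks and the 1-to-5 ratings here are hypothetical:

```python
# Classic likelihood-times-impact risk scoring; the risks and
# ratings are invented for illustration.
risks = [
    {"name": "new competitor enters", "likelihood": 4, "impact": 3},
    {"name": "supply delay",          "likelihood": 2, "impact": 5},
    {"name": "regulation change",     "likelihood": 1, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: {r["score"]}')
```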
Prioritization Grids for What Matters Most
Prioritization grids help students organize their options by importance and effort. When students provide AI with a list of tasks or projects, they can ask for a grid that sorts items into four quadrants along two axes: priority (high or low) and effort (high or low). AI evaluates the tasks against their stated goals and recommends where to focus. This helps students overcome the common trap of working hard without working smart. When they see their tasks mapped visually, they understand where to place energy and what to postpone or delegate.
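The grid logic itself is simple enough to sketch: two thresholds split tasks into four quadrants. The quadrant names and the 1-to-10 scales below are assumptions for illustration.

```python
def quadrant(priority: int, effort: int, threshold: int = 5) -> str:
    """Place a task on a 2x2 priority/effort grid (1-10 scales, hypothetical)."""
    if priority > threshold and effort <= threshold:
        return "quick win"          # high priority, low effort
    if priority > threshold:
        return "major project"      # high priority, high effort
    if effort <= threshold:
        return "fill-in task"       # low priority, low effort
    return "reconsider"             # low priority, high effort

print(quadrant(priority=8, effort=3))  # quick win
```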
Mini-Project: Choosing a Business Strategy
To help students practice, I often give them a business scenario—launching a subscription service, opening an online store, or expanding into a new region. They use AI to build a decision tree outlining possible outcomes. Then they generate a SWOT matrix to assess internal and external factors. After that, they request a risk-scoring summary to identify the biggest uncertainties. Finally, they create a prioritization grid to determine which actions should come first. By the end, the students have built a complete decision model that transforms confusion into clarity.
How AI Enhances, Not Replaces, Judgment
AI tools make model-building faster, more accurate, and more comprehensive. But one point I always emphasize is this: AI does not make the decision for you. It illuminates the road, but you choose the direction. The models created through AI help students articulate what they care about, evaluate alternatives, and see the ripple effects of their choices. In the end, they learn that data supports judgment—it does not replace it.
Decision Models as Tools for a Better Future
Every student who learns to build decision models gains a powerful advantage. They learn to reason clearly, question assumptions, and examine choices from multiple angles. Whether they are running a business, organizing a project, or planning their career, these models help them make decisions grounded in truth rather than fear or guesswork. AI simply gives them the ability to move through these models with more confidence and speed. And when decisions become clearer, the future becomes easier to shape with purpose and intention.
Using Kaggle Datasets for Real-World Practice
Using Kaggle datasets for real-world practice is one of my favorite ways to teach students how analysis works outside the classroom. Textbook examples are neat and predictable, but real markets, real customers, and real systems are not. Kaggle offers thousands of datasets—sales histories, marketing performance, environmental changes, financial indicators—and each one gives students an opportunity to work with the same kind of data professionals handle every day. When students learn to explore these datasets using ChatGPT ADA or a spreadsheet, they gain skills that translate directly into real jobs and real decisions.

Choosing a Dataset That Sparks Curiosity
My first step is always helping students choose a dataset they care about. Some gravitate toward sales and marketing, wanting to analyze customer behavior. Others choose environmental data because they want to understand climate patterns. A few pick finance because they enjoy watching how numbers shift in response to global events. On Kaggle, they browse through categories and click on a dataset that interests them. With one download, they now have a file full of rows and columns waiting to be explored. Curiosity becomes the fuel that drives the entire learning process.
Opening the Dataset and Asking the First Questions
Once the dataset is downloaded, we open it in Google Sheets or Excel. The first thing students notice is how imperfect real data can be—missing values, unexpected abbreviations, or columns that need cleanup. But before cleaning begins, I ask them to identify their first questions. Do they want to know which marketing channel performs best? Which region sells the most products? Which environmental factor changes most over time? ADA helps here. Students can upload or paste data into a ChatGPT ADA session and ask: “Give me an overview of the dataset,” or “What key metrics should I examine?” ADA responds with a summary, highlighting patterns worth exploring.
Cleaning the Data to Prepare for Analysis
The next step is to make the dataset usable. Students remove duplicates, standardize date formats, and correct errors using Sheets functions or ADA-generated formulas. Sometimes they ask ADA directly: “Write a formula to clean this column,” or “Convert these timestamps to a consistent format.” After a few adjustments, the data becomes ready for analysis. This stage teaches a crucial lesson: real insights require clean data, and cleaning is part of the analytical craft.
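Here is a sketch of that cleanup in plain Python, assuming two invented date formats and one exact duplicate. Real datasets will need more formats and fuzzier duplicate checks.

```python
from datetime import datetime

# Messy rows with mixed date formats and a duplicate -- the kind of
# cleanup described above. The rows and formats are assumptions.
rows = [
    ("2024-03-01", "earbuds", 59.99),
    ("03/02/2024", "backpack", 34.50),
    ("2024-03-01", "earbuds", 59.99),   # exact duplicate
]

def standardize(date_str):
    """Convert a date in any known format to ISO yyyy-mm-dd."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            return datetime.strptime(date_str, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {date_str}")

clean, seen = [], set()
for date, item, price in rows:
    record = (standardize(date), item, price)
    if record not in seen:       # drop exact duplicates
        seen.add(record)
        clean.append(record)

print(clean)  # two unique records, dates all in ISO format
```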
Running Basic Analyses With Spreadsheets or ADA
Now we begin exploring. Students create pivot tables to compare categories, line charts to show trends over time, and bar charts to highlight performance differences. ADA can answer deeper questions: “Identify the top-performing product categories,” or “Find correlations between marketing spending and revenue growth.” These analyses uncover relationships that were invisible at first glance. Students start making observations: certain months perform better, certain regions drive more sales, or certain expenses fluctuate in predictable patterns.
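A correlation like the marketing-spend example can be checked with a few lines of arithmetic. The spend and revenue figures below are invented:

```python
# Hypothetical monthly marketing spend and revenue figures.
spend   = [1000, 1500, 2000, 2500, 3000]
revenue = [12000, 15500, 19000, 24000, 26500]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(spend, revenue)
print(round(r, 3))  # close to 1.0: spend and revenue rise together
```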
Mini-Project: Sales and Marketing Insights
A popular project involves a Kaggle retail dataset. Students track monthly sales for different product categories. They use Sheets to build a line chart showing how each category rises or falls across the year. Then they ask ADA to summarize key patterns: which category grows fastest, which suffers from seasonality, or which offers the biggest revenue opportunity. Students conclude the project by writing a short “insight brief” explaining what a business should do next. Their recommendations become grounded in data rather than guesswork.
Mini-Project: Environmental Trend Detection
Another project focuses on environmental datasets—air quality, water usage, or weather patterns. Students graph long-term trends in spreadsheets and ask ADA to identify periods of change or unusual peaks. They learn how environmental shifts can be measured and how small anomalies might signal larger patterns. These projects connect analysis to real-world issues, helping students see how data informs public policy and scientific understanding.
Turning Analysis Into Actionable Insight
Once students complete their analysis, I ask them the most important question: “What should someone do with this information?” Data becomes meaningful only when it leads to a decision. Should a company expand its best-selling category? Should a nonprofit adjust its environmental strategy? Should a marketing team reallocate its budget to more effective channels? Students learn that analysis is not about generating charts—it is about generating direction.
Real-World Confidence Through Real Data
Every student who analyzes a Kaggle dataset discovers something powerful: they are capable of handling messy, complex, real-world data. ADA and spreadsheets help them clean, explore, and interpret information at a level that once required specialized training. When they finish, they do not just know how to run analyses—they know how to think like analysts. And that confidence becomes the foundation of every future project, business idea, or career they pursue.
AI for A/B Testing & Optimization
AI for A/B testing and optimization begins with one of the most valuable lessons in business: you don’t have to guess what works—you can test it. When students learn that companies compare two versions of something, whether an advertisement, an email subject line, or even a product price, they begin to understand how experimentation guides growth. Instead of assuming which version will perform better, businesses run A/B tests to find out. And with AI analyzing the results, those answers arrive faster and with deeper insight than traditional methods provide.

Setting Up the Two Options to Test
Every A/B test starts with a simple split between two options. Version A might be a bold headline; version B might be a softer, more personal one. A company could test two different prices for the same product or two colors for a call-to-action button. I teach students to approach these tests like controlled experiments: change one element at a time and keep the rest constant. When they use AI tools, such as ChatGPT ADA or marketing platforms with built-in AI features, they can design these variations with professional clarity. The AI can even suggest which elements would be most impactful to test based on industry data.
Collecting and Organizing Data From the Test
Once the test starts, real customers interact with the two versions. Some click on A, some on B. Some open one email but ignore the other. Some respond more strongly to one style of wording. The challenge is not running the test—it is making sense of the results. This is where AI shines. Students can gather the results in Google Sheets, Excel, or a marketing dashboard and ask ADA: “Which version performed better and why?” The AI evaluates click-through rates, conversion rates, revenue generated, and even deeper patterns based on time of day or device used. Suddenly the data stops being noise and becomes a clear narrative.
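The core comparison is straightforward once the counts are in hand. With invented impression and click totals:

```python
# Hypothetical A/B results: impressions and clicks for each version.
results = {
    "A": {"shown": 1000, "clicked": 58},
    "B": {"shown": 1000, "clicked": 73},
}

# Click-through rate per version, then the relative lift of the winner.
rates = {v: d["clicked"] / d["shown"] for v, d in results.items()}
winner = max(rates, key=rates.get)
lift = (rates[winner] - min(rates.values())) / min(rates.values())

print(winner, f"{rates[winner]:.1%}", f"lift {lift:.0%}")
```

A real analysis should also ask whether the gap is large enough to rule out chance, which is where a significance test comes in.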
Understanding Why One Version Wins
A/B testing isn’t just about knowing which version won—it’s about understanding why. That’s where optimization begins. AI helps students dig into the reasoning. ADA might explain that Version A performed better because its headline created urgency or because its button color stood out more. It might point out that younger users preferred Version B while older customers responded better to Version A. These insights help students learn that decisions aren’t made in the dark; they emerge from patterns revealed through testing and careful analysis.
Mini-Project: Testing Email Subject Lines
One of the simplest and most eye-opening projects is testing two email subject lines. Students create two versions of an email for a fictional business: one playful and informal, the other professional and direct. They assign each subject line to half of their mailing list—real or simulated—and track open rates. With ADA, they load the results and ask for a summary: “Which performed better and what does this tell us about our audience?” The explanation helps them see the connection between messaging, emotion, and behavior.
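Before acting on an open-rate difference, it is worth checking whether it could be chance. A two-proportion z-test sketch, with invented send and open counts:

```python
from math import sqrt, erf

# Invented results: opens out of 500 sends per subject line.
opens_a, sent_a = 120, 500   # playful subject line
opens_b, sent_b = 150, 500   # professional subject line

p_a, p_b = opens_a / sent_a, opens_b / sent_b
p_pool = (opens_a + opens_b) / (sent_a + sent_b)

# Standard error under the pooled-proportion null hypothesis.
se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
z = (p_b - p_a) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(round(z, 2), round(p_value, 3))
```

With these invented numbers the p-value falls below 0.05, so the difference is unlikely to be a fluke; with smaller lists the same gap might not be.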
Mini-Project: Optimizing a Landing Page
Another project involves testing two versions of a landing page layout. Students change the position of the headline or the color of the call-to-action button. Using AI-enhanced analytics tools, they gather data on where users clicked, how long they stayed, and whether they continued to the next step. ADA helps them compare engagement patterns and recommend the version to keep. They learn firsthand how even small design choices can dramatically shape user behavior.
Iterating Based on Results
Once a winner is chosen, the process doesn’t stop. Optimization is continuous. Students learn to ask: “What should we test next?” AI can help generate new variations based on the results. If Version A’s headline performed better, the next test might explore tone, length, or even the emotional trigger of the message. The cycle becomes a pattern: test, analyze, learn, improve. Over time, a business grows stronger not through lucky guesses but through disciplined iteration.
Turning Experiments Into Confident Decisions
A/B testing teaches students a crucial lesson: the best decisions emerge from observation, not assumption. AI tools accelerate this lesson by removing the friction of analysis. Instead of spending hours crunching numbers or building charts, students can focus on interpretation and strategy. They see how testing two simple options can reveal powerful truths about customers and how small improvements compound into major gains over time. When they learn to combine experimentation with AI, they gain the confidence to make decisions rooted in evidence—and the ability to refine those decisions again and again as new data arrives.
Ethics, Bias, and Responsible Data Use – Told by Zack Edwards
Ethics, bias, and responsible data use is a subject that sits at the heart of every conversation I have with students about AI. Tools like ChatGPT ADA, Excel Copilot, and Google Sheets can process data faster than any human, but they cannot make moral decisions for us. That responsibility rests on the person using them. When students work with data—whether for a classroom project, a business strategy, or a research assignment—they must understand that their choices affect real people. Ethical awareness becomes the foundation for every meaningful piece of analysis.

Protecting Data Privacy and Personal Information
Data privacy begins with respecting the individuals behind the numbers. When students download datasets or gather their own information, I remind them that each data point could represent a person with expectations and rights. Ethical data practice means avoiding unnecessary collection, securing sensitive information, and sharing only what is needed. AI tools can help anonymize or summarize data, but they cannot override reckless handling. When students ask ADA to “remove identifying details” or “aggregate sensitive records,” the model can guide the process, but the responsibility remains human.
Recognizing Bias in the Data We Use
Bias in datasets is one of the most important—and most challenging—topics to teach. Many datasets carry historical, cultural, or systemic biases, and AI models may unintentionally amplify them. When students run analyses, they must ask critical questions: Who collected this data? What groups might be underrepresented? What assumptions shaped the categories? ADA can explain patterns, but it cannot judge fairness. Students must look closely at outliers, missing groups, or skewed distributions and ask whether the dataset truly reflects reality. This habit builds ethical awareness and guards against drawing conclusions that harm or exclude others.
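One concrete habit is to compute each group's share of the dataset and flag anything far below its expected presence. The respondent groups and the 10% threshold below are invented for illustration:

```python
from collections import Counter

# Hypothetical survey respondents by age bracket; check whether any
# group is badly underrepresented before trusting group-level claims.
respondents = ["18-25"] * 80 + ["26-40"] * 15 + ["41-65"] * 5

counts = Counter(respondents)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}

# Flag groups that make up less than 10% of the sample (threshold is
# an assumption; the right cutoff depends on the population studied).
underrepresented = [g for g, s in shares.items() if s < 0.10]
print(shares)
print("flag for review:", underrepresented)
```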
Avoiding the Trap of Correlation vs. Causation
One of the most common mistakes students make is assuming that if two numbers move together, one must be causing the other. This is where ethics plays a hidden but essential role. Misinterpreting correlation as causation can lead to bad decisions and unfair judgments. I teach students to look for outside factors, contextual clues, or possible coincidences before assigning blame or credit. When students ask ADA, “Does this relationship indicate causation?” the model can explain the limitations and suggest methods like controlled tests or additional variables. Ethical thinking requires humility: acknowledging what the data cannot prove.
Making Fair and Responsible Decisions With AI
The final step is learning to turn analysis into ethical action. When AI highlights patterns—whether predicting performance, identifying risk, or scoring behavior—students must decide how to use that information fairly. This means asking questions such as: Will this decision disadvantage a specific group? Does the model rely on incomplete or biased data? Am I interpreting the results responsibly? AI can provide insight, but fairness requires reflection. A responsible analyst tests the model, challenges assumptions, and considers the human impact before implementing recommendations.
Mini-Project: Evaluating Bias in a Dataset
To practice these principles, I often have students choose a dataset with demographic information. They explore whether certain groups are misrepresented or whether outcomes differ significantly across categories. ADA helps them identify patterns, but the students must interpret whether these patterns are fair or reflective of deeper issues. They write a brief report explaining potential biases and suggesting how the dataset could be improved or contextualized. This exercise reminds them that good analysis requires both technical skill and moral clarity.
Mini-Project: Reviewing an AI Recommendation
Another project involves asking ADA to make a prediction—such as identifying the most successful marketing channel or forecasting customer behavior. Students then critique the recommendation. They evaluate whether the model used complete information, whether any biases might have influenced the outcome, and whether the decision would be fair to all stakeholders. By questioning the model, students learn the most important skill in responsible AI use: discernment.
Vocabulary to Learn While Learning About AI and Data Analysis
1. Dataset
Definition: A collection of related data organized in tables, files, or spreadsheets.
Sentence: Before running the analysis, the class downloaded a large dataset from Kaggle to practice with.
2. Data Cleaning
Definition: The process of fixing or removing incorrect, incomplete, or duplicated data.
Sentence: The students spent the first lesson on data cleaning to make sure their results would be accurate.
3. Correlation
Definition: A relationship where two things change together, though one may not cause the other.
Sentence: There was a strong correlation between study hours and higher test scores.
4. Outlier
Definition: A data point that is very different from the others.
Sentence: One extremely high score was an outlier that affected the class average.
5. Decision Tree
Definition: A diagram that shows choices and their possible outcomes.
Sentence: The team used a decision tree to decide whether to increase the price or keep it the same.
6. Dashboard
Definition: A screen showing key metrics and charts all in one place for quick review.
Sentence: In Tableau, the team created a dashboard that displayed daily sales, customer visits, and top products.
7. Model (in AI)
Definition: A computer system trained to recognize patterns and make predictions.
Sentence: Their AI model predicted which customers were likely to return next month.
8. Standardization
Definition: Making data consistent in format or measurement so it can be compared accurately.
Sentence: They used standardization to convert all dates in the spreadsheet into the same format.
Activities to Demonstrate While Learning About AI and Data Analysis
Build Your Own AI Dashboard – Recommended: Intermediate to Advanced Students
Activity Description: Students use AI tools to turn raw data into an interactive dashboard that communicates key business or science insights.
Objective: Teach students how visualization helps decision-making and how AI assists in identifying important trends.
Materials:
• Google Sheets
• Tableau AI or Power BI Copilot
• Sample dataset (sales, weather, or social media data)
• Laptop/Chromebook
Instructions:
Have students choose a dataset from Kaggle or use a teacher-provided file.
Import the dataset into Google Sheets and ask ChatGPT ADA: “Summarize key trends in this dataset.”
Students select 3–5 metrics to visualize.
Use Tableau AI or Power BI Copilot to generate charts, graphs, and a simple dashboard.
Students write a short summary explaining what decisions the dashboard could help someone make.
Learning Outcome: Students learn how dashboards turn complex data into visual stories that support better decisions. They also practice real-world visualization tools used by professionals.
A/B Testing Simulation – Recommended: Intermediate to Advanced Students
Activity Description: Students create two different versions of something—an email, poster, or webpage—and use AI to interpret which performs better and why.
Objective: Teach experimentation, data comparison, and optimization using AI-supported analysis.
Materials:
• Two versions of a poster or message
• Google Forms or a simple voting link
• ChatGPT ADA
• Spreadsheet software
Instructions:
Divide the class into groups and have each group create two versions (A and B) of an item.
Share both versions with classmates or families and collect responses through Google Forms.
Export the results into Sheets.
Ask ADA: “Which version performed better, and why?”
Students discuss the results and brainstorm improvements.
Learning Outcome: Students understand testing methodology, learn how AI evaluates performance, and recognize how small design changes affect behavior.
Detecting Trends With Real-World Data – Recommended: Intermediate to Advanced Students
Activity Description: Students explore a real dataset—like weekly temperature, cafeteria menu popularity, or school attendance—and use AI to find patterns and make predictions.
Objective: Help students identify trends and seasonality, and avoid common misunderstandings about cause and effect.
Materials:
• Google Sheets
• Simple dataset (teacher-provided or student-collected)
• ChatGPT ADA
Instructions:
Have students gather or load weekly or monthly data into Sheets.
Ask ADA: “Explain the trend in this dataset” or “Predict the next three weeks of data.”
Students create a line chart that visualizes the trend.
Discuss whether the predictions make sense and what could cause unexpected results.
Learning Outcome: Students learn how trends are detected and how forecasting works. They also begin distinguishing between correlation and causation.
Data Cleaning Challenge – Recommended: Intermediate to Advanced Students
Activity Description: Students receive a messy dataset and work to identify errors, duplicates, and inconsistencies using AI tools.
Objective: Teach why data cleaning is necessary and how incorrect data leads to incorrect conclusions.
Materials:
• “Messy” dataset prepared by the teacher (typos, duplicates, missing values)
• Google Sheets
• ChatGPT ADA
Instructions:
Give students the messy dataset.
Ask them to identify errors manually for 5 minutes.
Then allow them to ask ADA: “What errors do you see in this dataset?”
Using Sheets tools, students fix the errors with formulas ADA suggests (e.g., clean up formatting, remove duplicates).
Ask: “How would wrong numbers have affected a real decision?”
Learning Outcome: Students see firsthand why good decisions depend on clean data and how AI speeds up the tedious parts of analysis.



