Using Python in Excel: What It Actually Is, How It Works, and What You Can Do With It

In August 2023, Microsoft announced something that made a lot of data people do a double-take: Python was coming to Excel. Not through a third-party add-in, not through some clunky workaround involving copy-pasting between applications, but natively -- typed directly into a cell using a new =PY() function, executed in the cloud, and returned right back into the spreadsheet grid.

The feature entered public preview immediately, became generally available on Excel for Windows in September 2024, and rolled out to Excel for the web in early 2025. Mac support followed, with enterprise and business users gaining access starting with Excel for Mac Version 16.96. By mid-2025, Microsoft had broadened eligibility to include Office 365 E1, Microsoft 365 Business Basic, and Office 365 F3 licenses -- plans that include only the web apps rather than the desktop apps. Family and Personal subscribers can access the feature in preview on Windows, Mac, and web. Education subscribers have access through the Microsoft 365 Insider Program.

This is a genuine shift in how Python and Excel relate to each other. For decades, bridging the two required external libraries, local Python installations, and a willingness to leave the spreadsheet environment. Now, the bridge is built into the product itself.

This article explains how Python in Excel actually works under the hood, who designed the architecture, what you can and cannot do with it, how the security model and licensing actually function, what the real-world performance constraints look like, and where the relevant Python Enhancement Proposals (PEPs) connect to the architecture. It also covers something that other guides skip: the cognitive questions you should be asking before you decide whether this tool fits your workflow -- and what to use instead when it does not. Real code, real understanding, no fluff.

How the Integration Actually Works

The first thing to understand is that Python in Excel does not run Python on your computer. When you type =PY( into a cell, your code is sent to a secure container running on Microsoft Azure. That container uses the Anaconda Distribution for Python -- a curated set of packages maintained by Anaconda, Inc. -- and executes your code in an isolated sandbox. The results are then sent back to your worksheet.

When the feature was first announced, Stefan Kinnestrand, General Manager at Microsoft, described the core vision in a blog post on the Microsoft Tech Community: users should be able to perform advanced data analysis directly in Excel's familiar environment, accessing Python right from the ribbon, with no setup or installation required (source: Microsoft Tech Community, August 2023). That design philosophy -- you should not have to leave Excel, install anything, or configure anything -- has held steady through every subsequent update.

The partnership with Anaconda is central to the implementation. Anaconda provides the curated Python distribution that runs in the cloud environment, including popular libraries like pandas, Matplotlib, seaborn, scikit-learn, statsmodels, and SciPy. In their September 2024 general availability announcement, Anaconda emphasized that this integration lets users run Python code securely and directly within Excel's grid without a separate Python installation (source: Anaconda Press Release, September 2024).

Note

The execution model works differently from standard Excel formulas. Python cells are evaluated in order -- row by row, left to right -- more like cells in a Jupyter notebook than cells in a traditional spreadsheet that recalculates based on dependency graphs. This linear execution order has real implications for how you structure your workbook.

Who Built It: Guido van Rossum and the Team Behind It

This is a detail that many guides overlook, and it matters for understanding the quality of the integration. Guido van Rossum -- the creator of Python, its emeritus Benevolent Dictator For Life, and a Microsoft Distinguished Engineer since November 2020 -- was at Microsoft when Python in Excel was developed and has been a vocal advocate for the integration.

Van Rossum came out of retirement to join Microsoft, originally exploring a range of projects across machine learning, Azure, and notebooks before focusing on improving CPython performance. When Python in Excel was announced, van Rossum expressed enthusiasm, describing the integration as excellent and tightly built, and noting he had not imagined it would be possible when he first joined the company (source: Microsoft Tech Community, August 2023). The project involved collaboration across multiple Microsoft teams: Excel, Developer Division, Security, Azure, and Microsoft Research.

Having Python's creator at the same company -- and publicly endorsing the integration -- signals that the feature was built with an understanding of how Python's type system, memory model, and data structures should interact with a spreadsheet grid. It also explains why the feature feels like Python -- the semantics are not simplified or watered down. You get real pandas DataFrames, real NumPy arrays, and real matplotlib figures.

Peter Wang, Anaconda's CEO and co-founder, described the integration as a major step that would reshape how millions of Excel users work, calling it a significant milestone for Python adoption (source: The Register, August 2023). The partnership ensures that the cloud environment runs a curated, source-built distribution rather than a generic Python installation, which matters for reproducibility and security.

The xl() Function: Connecting Excel Data to Python

The mechanism for getting your Excel data into Python is the xl() function. This is a special function available only inside =PY() cells that reads data from your workbook and converts it into a pandas DataFrame.

=PY(
df = xl("A1:D100", headers=True)
df
)

That single line reads the range A1:D100 from your worksheet, treats the first row as column headers, and creates a pandas DataFrame. The DataFrame is then returned to the cell. If the cell's output mode is set to "Excel Value," the DataFrame spills across multiple cells as a table. If set to "Python Object," it displays as an object reference that other Python cells can use.
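
Outside of Excel, you can mimic what headers=True does with plain pandas. The range values below are invented for illustration; the point is that the first row of the raw range becomes the column labels, which is the shape a real xl() call hands back:

```python
import pandas as pd

# Hypothetical stand-in for xl("A1:D100", headers=True): the raw range
# arrives as rows of values, and headers=True promotes the first row
# to column labels.
raw_range = [
    ["order_id", "region", "units", "revenue"],  # row 1 becomes the header
    [1001, "East", 3, 29.97],
    [1002, "West", 1, 9.99],
    [1003, "East", 5, 49.95],
]

df = pd.DataFrame(raw_range[1:], columns=raw_range[0])
print(df.shape)          # (3, 4)
print(list(df.columns))  # ['order_id', 'region', 'units', 'revenue']
```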

You can also reference named ranges and Excel tables:

=PY(
df = xl("SalesTable[#All]", headers=True)
df
)

This reads an entire Excel Table called "SalesTable" into a DataFrame. The [#All] specifier includes headers, which is important for getting properly labeled columns.

Here is the cognitive shift that matters: the xl() function is the only bridge between the spreadsheet world and the Python world. Every piece of data your Python code processes has to pass through this function. You cannot read cells with ordinary Python syntax; there is no way to reference a cell address directly except through xl(). This constraint forces you to think about your data architecture more deliberately -- you need to decide upfront what data Python needs to see, and structure your Excel tables accordingly.

Data also flows through Power Query. If you use Excel's built-in connectors to pull data from SQL Server, REST APIs, SharePoint, or other sources, that data lands in your workbook and becomes available to Python via xl(). This means Python in Excel is not limited to data you manually enter -- it sits at the end of whatever data pipeline you have already built in Power Query, giving Python access to the same external data sources without needing its own network access.

Pro Tip

The Python in Excel environment pre-imports a core set of libraries -- pandas as pd, NumPy as np, matplotlib.pyplot as plt, seaborn as sns, and statsmodels -- so you can start using DataFrame operations immediately, with no import statements for basic work. You only need explicit imports for other libraries in the Anaconda distribution, such as scipy and sklearn.

The Python Editor: Writing Code Beyond the Formula Bar

The Formula Bar works for short snippets, but writing real analysis code in a single-line input field is painful. Microsoft addressed this with the Python Editor -- a dedicated editing pane that provides a more IDE-like experience for writing Python code within Excel.

The Python Editor leverages Visual Studio Code technology and provides features you would expect from a code editor: syntax highlighting, autocompletion, inline documentation, parameter hints, and multi-line editing. This is what makes writing complex Python in Excel practical rather than merely possible. You can write a 30-line data cleaning pipeline and actually read it, rather than squinting at a formula bar that shows four words at a time.

In late 2025, Microsoft introduced an editable Python initialization script for Insiders on Windows. This feature lets you customize the default Python startup code for a workbook -- importing libraries, defining helper functions, or setting configuration variables that should be available to all Python cells. Think of it as a __init__.py for your workbook. If you find yourself writing the same import seaborn as sns line in every cell, the initialization script eliminates that repetition. The initialization editor lets you view and edit the startup code, add library imports and functions, or reset it to the default state. If your initialization code contains errors, affected cells display #PYTHON! until you resolve the issues (source: Neowin, October 2025).

What You Can Actually Do: Practical Examples

Data Cleaning and Transformation

One of the strongest use cases is doing data cleaning that would be painfully complex with Excel formulas alone.

=PY(
df = xl("A1:F500", headers=True)

# Standardize text columns
df["customer_name"] = df["customer_name"].str.strip().str.title()

# Fix inconsistent date formats
df["order_date"] = pd.to_datetime(df["order_date"], format="mixed")

# Fill missing values in discount column with zero
df["discount"] = df["discount"].fillna(0)

# Calculate a derived column
df["net_revenue"] = df["revenue"] - (df["revenue"] * df["discount"] / 100)

df
)

In pure Excel, the equivalent of that .str.strip().str.title() chain would require a nested TRIM(PROPER(...)) formula copied down 500 rows. The date parsing would be a nightmare of DATEVALUE and TEXT combinations. And filling missing values conditionally would need IF(ISBLANK(...)) logic in every cell. With pandas, it is five lines.

But there is a deeper point here about cognitive architecture. When you clean data with formulas, you are thinking cell by cell -- each formula is a local transformation of one value. When you clean data with pandas, you are thinking in columns -- each operation transforms an entire series at once. This is not just a convenience difference. It changes how you reason about your data. Column-level thinking naturally leads you to ask questions like "what is the distribution of this field?" and "are there systematic patterns in the missing values?" -- questions that cell-level thinking rarely surfaces.
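
To make that concrete, here is a sketch (with made-up regions and values) of the column-level questions pandas makes natural -- the kind a cell-by-cell formula workflow rarely prompts:

```python
import pandas as pd
import numpy as np

# Illustrative data with a systematic gap: discounts are recorded only
# for one region. All names and values here are invented for the sketch.
df = pd.DataFrame({
    "region": ["East", "East", "West", "West", "West"],
    "revenue": [120.0, 95.0, 80.0, 130.0, 60.0],
    "discount": [5.0, 10.0, np.nan, np.nan, np.nan],
})

# "Are there systematic patterns in the missing values?"
# Share of missing discounts, broken out by region:
missing_by_region = df["discount"].isna().groupby(df["region"]).mean()
print(missing_by_region)  # East 0.0, West 1.0 -> the gap is systematic

# "What is the distribution of this field?"
summary = df["revenue"].describe()
print(summary["mean"])    # 97.0
```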

Statistical Analysis Beyond Excel's Built-Ins

Excel has basic statistical functions, but Python opens the door to analyses that simply are not available natively.

=PY(
from scipy import stats

df = xl("SalesData[#All]", headers=True)

# Run a correlation test between advertising spend and revenue
corr, p_value = stats.pearsonr(df["ad_spend"], df["revenue"])

f"Correlation: {corr:.4f}, p-value: {p_value:.6f}"
)

This runs a Pearson correlation test and returns both the correlation coefficient and the p-value. Excel can give you CORREL(), but it will not give you the statistical significance. The same pattern extends to t-tests, ANOVA, regression analysis, and more.

=PY(
from scipy import stats

df = xl("TestScores[#All]", headers=True)

# Independent samples t-test: do Group A and Group B differ significantly?
group_a = df[df["group"] == "A"]["score"]
group_b = df[df["group"] == "B"]["score"]

t_stat, p_value = stats.ttest_ind(group_a, group_b)

f"t-statistic: {t_stat:.4f}, p-value: {p_value:.6f}"
)

Consider what is really happening here. You are not just running a statistical test -- you are embedding the statistical reasoning directly next to the data it references, in a format that a colleague can open and immediately understand. A Jupyter notebook can do the same analysis, but it lives in a separate file, disconnected from the data source. Python in Excel removes that gap.
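
The same pattern extends to more than two groups. Here is a sketch of a one-way ANOVA with invented scores; inside a =PY() cell, the three lists would instead be columns pulled out of xl():

```python
from scipy import stats

# One-way ANOVA across three hypothetical groups: are the group means
# plausibly drawn from the same population?
group_a = [82, 85, 88, 90, 79]
group_b = [75, 70, 74, 72, 78]
group_c = [91, 94, 89, 92, 95]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F-statistic: {f_stat:.4f}, p-value: {p_value:.6f}")
```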

Visualization With Matplotlib and Seaborn

Python in Excel can render plots directly into the worksheet. This gives you access to Python's entire visualization ecosystem without leaving the spreadsheet.

=PY(
import matplotlib.pyplot as plt
import seaborn as sns

df = xl("SalesData[#All]", headers=True)

fig, ax = plt.subplots(figsize=(8, 5))
sns.boxplot(data=df, x="region", y="revenue", ax=ax)
ax.set_title("Revenue Distribution by Region")
plt.tight_layout()
fig
)

When you return a matplotlib figure object as the last expression in a =PY() cell, Excel renders it as an image in the worksheet. Set the cell output to "Python Object" to display the chart. This gives you access to box plots, heatmaps, violin plots, scatter matrices, and every other visualization that seaborn and matplotlib support -- visualizations that Excel's built-in chart engine simply cannot produce.

Machine Learning in a Spreadsheet

You can train a machine learning model inside Excel. Whether you should for production work is a separate question, but for exploration and prototyping, it is remarkably useful.

=PY(
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
import numpy as np

df = xl("HousingData[#All]", headers=True)

X = df[["sqft", "bedrooms", "bathrooms"]]
y = df["price"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression()
model.fit(X_train, y_train)

score = model.score(X_test, y_test)
f"R-squared: {score:.4f}"
)

You just trained a linear regression model on housing data inside Excel. The R-squared value appears in the cell. You could extend this to output predictions as a DataFrame that spills across the worksheet, giving you a column of predicted prices right next to the actual data.
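
Here is a sketch of that extension, using synthetic housing data in place of the xl() call -- the column names, coefficients, and noise level are all invented for illustration:

```python
from sklearn.linear_model import LinearRegression
import pandas as pd
import numpy as np

# Synthetic stand-in for xl("HousingData[#All]", headers=True).
rng = np.random.default_rng(42)
sqft = rng.integers(800, 3000, size=50)
df = pd.DataFrame({
    "sqft": sqft,
    "bedrooms": rng.integers(1, 5, size=50),
    "price": sqft * 150 + rng.normal(0, 5000, size=50),
})

model = LinearRegression().fit(df[["sqft", "bedrooms"]], df["price"])

# Returning this DataFrame as the last expression of a =PY() cell
# (with output set to "Excel Value") would spill a predicted-price
# column into the grid next to the actual data.
out = df.assign(predicted_price=model.predict(df[["sqft", "bedrooms"]]).round(0))
print(out.head())
```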

The question worth asking is not "can I do machine learning in Excel?" -- clearly you can. The better question is: "does my audience understand what an R-squared value means, and will they make decisions based on it?" If the answer is yes and your dataset fits in a workbook, this workflow removes an enormous amount of toolchain friction. If the answer is no, the model output is decorative at best and misleading at worst.

The Security Model: What the Sandbox Allows and Blocks

Understanding the security architecture is important because it directly determines what you can and cannot do. Python code runs in hypervisor-isolated containers built on Azure Container Instances. According to Microsoft's data security documentation, the containers have no access to your local file system, no network access, no access to your computer or devices or user account, and no ability to install additional packages beyond what Anaconda provides (source: Microsoft Support). The workbook is the entire universe as far as Python is concerned.

In a Reddit AMA, the Microsoft Excel engineering team explained their rationale for this cloud-first design. They treat all Python code in a workbook as untrusted and execute it in containers that have no outbound network access. Running Python securely on a local machine is a fundamentally difficult problem, and the cloud sandbox approach allows Microsoft to enforce isolation guarantees that would be nearly impossible to maintain on user hardware with arbitrary local configurations.

What the Sandbox Blocks

You cannot use requests or urllib to fetch data from an API. You cannot read files from your hard drive with open() or pd.read_csv("C:\\data\\file.csv"). You cannot install packages with pip install. You cannot run system commands. You cannot execute VBA macros or interact with the Excel object model programmatically.

The data flow is strictly controlled: your workbook data goes into the Azure container via xl(), Python processes it, and results come back through =PY(). Microsoft's security documentation states that data is not persisted in the cloud -- containers stay online only while the workbook is open or until a timeout occurs. No workbook data is used for AI training.

For enterprise users, the feature is GDPR compliant and adheres to the European Union Data Boundary (EUDB). Customers in the EU have their containers run in the EU. Multinational tenants can configure all containers to run in Europe through the Office Configuration Service (source: Microsoft Support). Python in Excel also follows Excel's existing security policies: workbooks from the internet open in Protected View, which blocks Python formula execution until the user explicitly enables it.

There is a subtlety here that security-conscious users should think about. The sandbox prevents your Python code from reaching the outside world, but it does not prevent the outside world from reaching you. If someone sends you a workbook with malicious Python code and you click "Enable Editing," that code will execute in the cloud container and has read access to all data in the workbook via xl(). It cannot exfiltrate that data anywhere -- but the execution still happens. Microsoft mitigates this with Protected View and optional registry-based security warnings that administrators can configure.

Licensing and Compute Tiers: What You Actually Pay For

The licensing model is one of the areas where confusion is common, so it is worth being precise.

Standard compute is included with qualifying Microsoft 365 subscriptions at no additional cost. This covers Enterprise and Business licenses (like Office 365 E3 and E5), and as of mid-2025, has expanded to include E1, Business Basic, and F3 plans. Family and Personal subscribers can access the feature in preview. Standard compute means your Python formulas run, but calculation times are slower and the recalculation mode is limited to automatic.

Premium compute is an optional paid add-on. It provides faster Python calculation times and access to additional calculation mode options, including manual and partial recalculation modes. With the add-on, you can control when and how often your workbooks recalculate Python formulas -- which matters when you have dozens of Python cells and do not want every keystroke to trigger a cloud round-trip. A limited amount of premium compute is included with each user's Microsoft 365 subscription each month. After that allowance is exhausted, users fall back to standard compute or can purchase the Python in Excel add-on license through the Microsoft 365 admin center or via in-app prompts. Note that the premium compute preview has ended for Enterprise and Business subscriptions -- you can no longer opt into the preview for additional premium compute with those plans. To get more premium compute after reaching your monthly limit, your admin must purchase the add-on license (source: Microsoft Support).

This tiered model has implications for how you structure workbooks. With standard compute, a workbook containing 50 Python cells that all recalculate on every change will feel sluggish. The practical response is to architect your workbooks so that expensive computations are isolated in cells that only need to run occasionally, and lightweight data retrieval happens in the cells that change frequently. Premium compute alleviates this, but the design principle remains sound regardless of your tier.

Pro Tip

If you are building workbooks for a team, check whether your organization has disabled self-service purchasing. The Python in Excel add-on supports self-service purchase, meaning individual users can buy it through in-app prompts -- unless an admin has restricted that capability. If self-service is disabled, users can submit license requests to their admin instead.

The PEPs Behind the Scenes

Several Python Enhancement Proposals are relevant to how Python in Excel operates, even though the integration is a Microsoft product rather than a CPython feature.

PEP 484 -- Type Hints. Pandas, NumPy, scikit-learn, and the other libraries available in the Python in Excel environment all use type hints extensively. When you write code in the Excel Formula Bar or the Python Editor, the IDE-like features -- autocompletion, inline documentation, parameter hints -- are powered by type information defined through PEP 484. This is what makes the experience feel like writing Python in a real editor rather than typing into a dumb text box.
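
A minimal illustration of what those hints look like -- the function and its names are hypothetical, but the annotations are exactly the kind of metadata editor tooling reads to offer completions and parameter hints:

```python
# PEP 484 annotations: the editor reads these to know that revenue and
# discount_pct are floats and that the call returns a float.
def net_revenue(revenue: float, discount_pct: float = 0.0) -> float:
    """Revenue after applying a percentage discount."""
    return revenue - revenue * discount_pct / 100

print(net_revenue(200.0, 10.0))  # 180.0

# The annotations are plain runtime metadata, inspectable by tools:
print(net_revenue.__annotations__)
```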

PEP 3118 -- Revising the Buffer Protocol. The reason pandas can efficiently wrap NumPy arrays, and the reason those arrays can be serialized and sent to and from the Azure container without excessive overhead, traces back to PEP 3118. The buffer protocol allows Python objects to share memory representations in a standardized way. When your DataFrame travels from the cloud container back to Excel, the underlying data structures use this protocol for efficient memory management.
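
The protocol is easy to see in miniature with NumPy and the built-in memoryview. This is a sketch of the mechanism, not the actual serialization path Excel uses:

```python
import numpy as np

# A NumPy array exposes its memory through the PEP 3118 buffer
# protocol, so other consumers can view the same bytes without copying.
arr = np.arange(5, dtype=np.float64)

view = memoryview(arr)          # zero-copy view via the buffer protocol
print(view.format, view.shape)  # 'd' (5,)

# Build another array over the same buffer -- still no copy involved.
shared = np.frombuffer(view, dtype=np.float64)
arr[0] = 99.0
print(shared[0])                # 99.0: both names see one memory block
```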

PEP 249 -- Python Database API Specification v2.0. While you cannot make direct database connections from within the Python in Excel sandbox, the data that arrives via Power Query often originates from databases accessed through PEP 249-compliant drivers. If you use Power Query to pull data from SQL Server, PostgreSQL, or another relational database into your workbook, and then reference that data with xl(), the upstream pipeline relied on PEP 249's standardized interface. Understanding this helps you appreciate the full data pipeline even when the Python in Excel sandbox only sees the final workbook data.
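
The standard library's sqlite3 module implements the same PEP 249 interface, so the pattern behind those upstream drivers can be sketched without any database server -- the table and values here are invented:

```python
import sqlite3

# PEP 249 defines the connect / cursor / execute / fetch interface that
# drivers for SQL Server, PostgreSQL, and others also implement.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
cur.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120.0), ("West", 80.0), ("East", 95.0)],
)
cur.execute(
    "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY region"
)
rows = cur.fetchall()
print(rows)  # [('East', 215.0), ('West', 80.0)]
conn.close()
```

Swapping sqlite3 for a SQL Server or PostgreSQL driver changes the connect() call, not the shape of the code -- that interchangeability is the point of the spec.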

PEP 20 -- The Zen of Python. This one is philosophical rather than technical, but directly relevant. One of its central tenets is that there should be one obvious way to accomplish a given task. Python in Excel, for all its constraints, embodies this principle for a specific audience: the Excel analyst who needs Python's analytical power but does not want to maintain a local Python installation, manage virtual environments, or switch between applications. The =PY() function is the one obvious way to bring Python into the spreadsheet.

What Python in Excel Is Not

It is worth being explicit about the boundaries. Python in Excel is not a replacement for Jupyter notebooks, VS Code, or a proper Python development environment. The Formula Bar and even the Python Editor are not full IDEs. You cannot debug with breakpoints. You cannot run unit tests. You cannot version control your Python code in any meaningful way -- it lives inside Excel cells, not in .py files.

It is also not a replacement for VBA when it comes to Excel automation. VBA can respond to worksheet events, manipulate the Excel object model, create custom dialog boxes, and automate repetitive UI tasks. Python in Excel does none of those things. The two technologies solve different problems.

And it is not a local execution environment. This is the constraint that generates the strongest reactions. There is no option to run Python locally on your machine within Excel. The Microsoft Excel engineering team has acknowledged community requests for local execution but has explained that the cloud-first approach was chosen for three reasons: security isolation is fundamentally easier to guarantee in a controlled cloud environment, managing local Python environments across millions of user machines would be a support nightmare, and cloud execution ensures the feature works consistently regardless of the user's local setup.

Watch Your Cell Order

The linear execution order (row by row, left to right) means you need to think carefully about where you place Python cells. If cell B2 depends on a DataFrame created in cell A5, it will fail because B2 calculates before A5. This trips up newcomers regularly and is a fundamentally different mental model from Excel's dependency-based recalculation.

The Offline Problem and Latency Reality

Because Python code executes in Azure, there is an inherent dependency on network connectivity. If you are offline -- on a plane, in a building with poor connectivity, or during a cloud service outage -- your Python cells will not recalculate. The workbook itself is still accessible, and you can view the last-calculated results, but you cannot update them. Standard Excel formulas in the same workbook will continue to work normally.

This creates a practical problem for people who present from their laptops in conference rooms with unreliable Wi-Fi. If your workbook depends on Python cells for key outputs, and the network goes down during a presentation, those outputs will show stale data or errors. The mitigation is straightforward but requires planning: set Python cell outputs to "Excel Value" for any results you need to persist, so the spilled values remain visible even if the Python cells cannot recalculate.

On latency: every Python cell execution involves a network round-trip to Azure. For a single cell with a simple calculation, this adds a few seconds. For a workbook with many Python cells, the cumulative latency can be significant. Standard compute is slower than premium compute, and the difference becomes noticeable with complex operations. In practice, you should expect a data cleaning pipeline that takes 200 milliseconds locally in a Jupyter notebook to take several seconds in Python in Excel, because the bottleneck is network overhead, not computation.

The premium compute add-on reduces calculation time but does not eliminate the network latency. If you are working with time-sensitive data and need instant recalculation, Python in Excel is not the right tool -- a local Python environment will always be faster for compute-intensive work.

When It Makes Sense and When It Does Not

Python in Excel is a strong fit when you are an analyst whose primary tool is Excel and you need capabilities that go beyond what Excel offers natively. Statistical testing, text processing with regex, machine learning prototyping, data cleaning operations that would require dozens of helper columns -- these are the sweet spots.

It is less of a fit when you are a developer who already has a Python environment set up and needs full control over packages, file access, and network requests. In that case, the alternatives section below will serve you better.

The ideal user is someone who thinks in spreadsheets but has outgrown what spreadsheets can do alone. Someone who does not want to become a full-time Python developer but who recognizes that VLOOKUP has limits and that a scatter matrix cannot be built with conditional formatting.

Here is a decision framework worth internalizing: if the person who will consume your output expects an Excel file, Python in Excel is worth considering. If they expect a dashboard, consider Power BI. If they expect a report, consider Jupyter with nbconvert. If they expect a deployed model, you need a proper ML pipeline. The tool should match the consumer, not just the creator.

Alternatives Worth Knowing About

Understanding what Python in Excel is not becomes sharper when you understand what else exists and when each alternative is the better choice.

openpyxl is a pure Python library for reading and writing .xlsx files programmatically. If you need to generate hundreds of Excel reports from a template, or parse data from uploaded spreadsheets in a web application, openpyxl is the right tool. It is free and open source. Python in Excel cannot generate workbooks programmatically -- it operates inside a single workbook at a time.

xlwings goes further: it can automate Excel from external Python scripts, call Python functions from VBA, and create Excel add-ins. It runs locally, which means it has full access to your file system, network, and any Python packages you have installed. For workflows that need automation of the Excel application itself -- opening files, formatting cells, sending emails based on spreadsheet data -- xlwings is what Python in Excel cannot be. The core library is BSD-licensed and free to use; advanced features like embedded code, template-based reporting, and one-click installers are available through xlwings PRO, which requires a paid plan for commercial use.

PyXLL is a commercial product focused specifically on creating Excel add-ins. Where xlwings emphasizes scripting and automation from outside Excel, PyXLL embeds Python inside Excel so that Python functions appear as native worksheet functions, ribbon buttons, and context menus -- indistinguishable from built-in features to the end user. It runs locally, supports any Python package, and is particularly suited for financial modeling teams and quantitative analysts who need high-performance, custom Python-powered functions that behave exactly like native Excel functions.

Jupyter notebooks remain the gold standard for exploratory data analysis when you need full Python capability. If you need network access, arbitrary package installation, breakpoint debugging, version control, or collaborative editing with JupyterHub, a notebook is the right environment. The tradeoff is that your audience needs to understand notebooks -- and for many business stakeholders, an Excel file is a more accessible deliverable than an .ipynb file.

Google Sheets with Apps Script offers a different kind of integration: JavaScript-based scripting with built-in access to Google's APIs. It does not support Python natively, but for teams already in Google Workspace, it provides analogous extensibility within a spreadsheet environment.

Putting It Together: A Complete Workflow

Here is what a realistic workflow looks like. You have sales data in an Excel table. You want to clean it, analyze it, visualize it, and present findings -- all without leaving the workbook.

Cell A1 -- Load and clean the data:

=PY(
df = xl("RawSales[#All]", headers=True)
df["date"] = pd.to_datetime(df["date"], format="mixed")
df["product"] = df["product"].str.strip().str.lower()
df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce")
df = df.dropna(subset=["revenue"])
df
)

Cell A3 -- Generate summary statistics by product:

=PY(
summary = df.groupby("product").agg(
    total_revenue=("revenue", "sum"),
    avg_order=("revenue", "mean"),
    order_count=("revenue", "count"),
).sort_values("total_revenue", ascending=False)
summary
)

Cell D3 -- Create a visualization:

=PY(
import matplotlib.pyplot as plt
import seaborn as sns

fig, axes = plt.subplots(1, 2, figsize=(12, 5))

# Revenue by product
summary.plot(kind="barh", y="total_revenue", ax=axes[0], legend=False)
axes[0].set_title("Total Revenue by Product")
axes[0].set_xlabel("Revenue")

# Revenue distribution
sns.histplot(df["revenue"], bins=30, ax=axes[1], kde=True)
axes[1].set_title("Revenue Distribution")

plt.tight_layout()
fig
)

Notice that cell A3 references df, which was created in cell A1. Because A1 sits in an earlier row, the row-by-row execution order guarantees that df exists when A3 runs. The chart lives in D3 rather than D1 for the same reason: a cell in row 1 would calculate before the summary in row 3 existed. D3 shares a row with A3 but sits to its right, so by the time it executes, both summary and df have already been calculated.

This is one workbook. The data, the analysis, the visualization, and the underlying Python code all live together. You can share the workbook with a colleague, and as long as they have a qualifying Microsoft 365 subscription, they can recalculate the Python cells and see the same results. That level of integration did not exist before.

Where This Is Heading

The trajectory of Python in Excel is inseparable from Microsoft's broader AI strategy. In a podcast interview with Bill Gurley and Brad Gerstner, Satya Nadella compared the Python-Excel combination to the synergy between GitHub and Copilot, describing it as an example of how pairing analytical power with a familiar interface creates something greater than either tool alone (source: CX Today, reporting on BG2 podcast). That characterization points toward where things are going: Python as the computational engine, Excel as the interface, and AI as the glue that writes the code for you.

The Copilot in Excel with Python feature allows you to describe an analysis in natural language, and Copilot generates the Python code for you. It entered preview alongside the general availability announcement in September 2024 and has continued to evolve. In January 2026, Microsoft made Agent Mode in Excel generally available on desktop -- a multi-step AI assistant built on reasoning models from both OpenAI and Anthropic that can plan analyses, generate formulas and Python code, create charts, and iterate on results until they are verified. Agent Mode supports model selection, letting users choose between providers, and represents a significant step beyond simple code generation toward autonomous analytical workflows (source: Microsoft Tech Community, January 2026). EU availability followed in February 2026 (source: Microsoft Tech Community, February 2026).

In February 2026, Microsoft consolidated its Copilot entry points in Excel, retiring the separate App Skills ribbon interface and merging those capabilities into Copilot Chat and Agent Mode. The rationale was straightforward: customers had been complaining that multiple Copilot entry points within the same application created a fragmented experience. However, the consolidation created a gap: the Advanced Analysis features -- which used Python in Excel for data analysis and visualization through App Skills -- are not yet available in Agent Mode or Copilot Chat. Microsoft is working on restoring that functionality but has not provided a specific timeline (source: Neowin, February 2026).

Whether AI-driven code generation ultimately changes how people interact with data is an open question, but the direction is clear. The boundary between "spreadsheet user" and "Python programmer" is getting thinner.

For now, the practical takeaway is this: if you work in Excel and you have ever hit the ceiling of what formulas, pivot tables, and built-in charts can do, Python in Excel removes that ceiling. It is not a full Python development environment, and it is not trying to be. It is Python's analytical engine, delivered through the interface that hundreds of millions of people already know how to use.

That is a meaningful thing.

pythoncodecrack.com -- Practical Python articles from fundamentals to advanced techniques. Real code, real examples, real understanding.