Meta has started installing tracking software on the work laptops of its US-based employees. It captures keystrokes, mouse movements, clicks, and occasional screenshots. The captured activity is fed back into training data for AI agents. There is no opt-out. The program was disclosed to staff in an internal memo in April 2026, and the response from inside the company has been about what you would expect.

The program is called the Model Capability Initiative, and it sits under a broader effort branded as the Agent Transformation Accelerator within Meta’s Superintelligence Labs, the AI group now led by former Scale AI CEO Alexandr Wang.

What the Software Captures

The tool monitors a designated list of work apps and websites rather than everything on the machine. Reported targets include Gmail, Google Chat, Visual Studio Code, and Meta’s internal assistant platform Metamate. Within those apps, it records:

  • Keystrokes
  • Mouse movements and clicks
  • Occasional screenshots

Meta’s stated purpose is narrow: build AI agents that can actually operate software the way people do. A spokesperson framed it as a data problem. If you want models that can navigate dropdown menus, use keyboard shortcuts, and handle the small mechanical steps that make up white-collar work, you need real examples of people doing those things. Synthetic data does not cover it. Public internet data does not cover it. What covers it is watching actual employees use their actual tools.
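For a concrete sense of what that data looks like, here is a rough sketch of a captured interaction trace, written in Python. Nothing about Meta’s actual tooling or schema is public: the field names, event types, and app allowlist below are illustrative assumptions, not a description of the real system.

```python
# Illustrative sketch only: the real schema, field names, and app list are not public.
# It shows the general shape of a screen-interaction trace a computer-use agent could
# be trained on, assuming events are only recorded inside an app allowlist.
from dataclasses import dataclass, field
from typing import Literal

# Hypothetical allowlist mirroring the apps named in reporting.
MONITORED_APPS = {"Gmail", "Google Chat", "Visual Studio Code", "Metamate"}

@dataclass
class InteractionEvent:
    app: str                                  # foreground application
    kind: Literal["keystroke", "mouse_move", "click", "screenshot"]
    timestamp_ms: int                         # time since session start
    payload: dict                             # e.g. {"key": "Tab"} or {"x": 412, "y": 88}

@dataclass
class Trace:
    session_id: str
    events: list[InteractionEvent] = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        # Only capture activity inside the designated work apps.
        if event.app in MONITORED_APPS:
            self.events.append(event)

# A few seconds of hypothetical activity: the raw material of "how a person uses software".
trace = Trace(session_id="demo-session")
trace.record(InteractionEvent("Visual Studio Code", "click", 0, {"x": 412, "y": 88}))
trace.record(InteractionEvent("Visual Studio Code", "keystroke", 350, {"key": "Ctrl+P"}))
trace.record(InteractionEvent("Gmail", "screenshot", 9200, {"path": "frame_0042.png"}))
```

The value of a record like this is not any individual keystroke but the sequence: given this screen and this task, a person did these things in this order.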

The company says managers cannot access the data and that it will not be used for performance evaluation. That framing is the thing employees have been asked to trust.

No Opt-Out

Meta CTO Andrew Bosworth addressed the program on an internal channel and was blunt about the terms: there is no option to opt out of this on a work-provided laptop. His broader framing was equally direct - he described a future where Meta’s agents primarily do the work, and employees mostly direct, review, and improve them. The surveillance and the long-term plan are the same story told from two angles. Capture the work now, automate the work later.

By multiple accounts, the internal response was a flurry of crying, shocked, and angry emoji reactions on his message. Nobody inside Meta appears to have been surprised that this would be unpopular. It was announced anyway.

Why Only the US

The scope is US employees only. The reason is regulatory, not philosophical. EU-based employees are protected by the General Data Protection Regulation, which makes keystroke logging and continuous workplace monitoring a legal minefield: it demands an explicit legal basis, employee notification, data protection impact assessments, and a meaningful proportionality argument. UK employees fall under the UK GDPR and Information Commissioner’s Office guidance on monitoring workers, which is similarly restrictive.

In the United States, there is effectively no federal limit on workplace surveillance. Yale law professor Ifeoma Ajunwa put it plainly to Reuters: there is no limit on worker surveillance in the US at the federal level. A handful of states require notice, but notice is not consent, and on a company-provided device the legal baseline is that the employer can monitor essentially whatever it wants.

So the map of who gets tracked is really a map of who the law does not protect.

The Irony Problem

Meta is not a neutral actor being asked to weigh surveillance for the first time. This is a company whose business model is built on watching people, and whose regulatory history with the FTC, the Irish Data Protection Commission, and others reads as a long list of privacy enforcement actions. Employees subject to this program are now inside the same kind of data collection apparatus the company has spent two decades arguing is fine when applied to users.

The Register’s coverage leaned into this directly, noting that staff get to experience the unease users have felt for years. That framing is unkind but fair. The company’s defence of the program - that the data has a narrow purpose, is protected by safeguards, and is not used against the person it is collected from - is a compressed version of the defences it has offered about its consumer products. Whether that defence is more convincing when the subjects are employees rather than users is an open question.

What This Tells You About Agent Training

Step back from the Meta-specific details and the broader picture is useful. The frontier labs are all trying to build agents that can use computers the way humans do, and all of them have the same fundamental problem: the best training data for “how a knowledge worker uses a computer” is a knowledge worker using a computer.

OpenAI has been building computer-use capabilities for a while. Anthropic shipped computer use in 2024 and has been iterating since. Microsoft has built specialised cloud PCs positioned for agent workloads. Every one of these systems needs examples of real software use, and every one of them has to source that data somehow.

Meta’s approach is to source it from its own workforce. That is cheaper and more controllable than licensing data or paying contractors, and it has the convenient property that the people being recorded signed employment agreements that, on a US work laptop, let the company do it. But it also means the training data pipeline for Meta’s next generation of agents runs directly through the daily work of people who would rather not be in the pipeline.
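To make “pipeline” slightly less abstract: the standard way demonstration data like this gets consumed is behavioural cloning, pairing each observed screen state with the action the person took next and training a model to imitate it. The sketch below shows only that pairing step, under assumed event names; it is generic imitation learning, not a description of Meta’s pipeline.

```python
# Minimal sketch of the pairing step in behavioural cloning over interaction traces.
# Generic imitation learning under assumed event fields; not Meta's actual pipeline.
from typing import Iterable, Tuple

def to_training_pairs(events: list[dict]) -> Iterable[Tuple[dict, dict]]:
    """Pair each captured screen state with the next low-level action taken on it."""
    observation = None
    for event in events:
        if event["kind"] == "screenshot":
            observation = event            # the state the model will later see
        elif observation is not None and event["kind"] in {"click", "keystroke", "mouse_move"}:
            yield observation, event       # supervision: given this screen, a person did this

# Three raw events become two (screen, action) training pairs.
pairs = list(to_training_pairs([
    {"kind": "screenshot", "frame": "editor.png"},
    {"kind": "click", "x": 412, "y": 88},
    {"kind": "keystroke", "key": "Ctrl+P"},
]))
```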

Why the Reaction Matters

The interesting signal here is not that Meta is doing surveillance - Meta has always been comfortable with surveillance. The signal is that the internal reaction has been bad enough to leak, and that the company went ahead anyway. That is what tells you how valuable the training data is perceived to be.

If the marginal value of another hour of human computer-use data were small, you would never accept the PR cost of forcing it through. The fact that Meta is accepting that cost - and that its peers are working on the same problem from different angles - is a reasonable indicator of where the bottleneck actually sits in agent development right now. It is not reasoning. It is not context windows. It is the long tail of clicks, drags, tab-switches, and menu navigations that make up the actual texture of work.

For employees, the more uncomfortable implication is the one Bosworth stated openly: agents primarily do the work, humans direct and review. The tracking software is a training signal for the system that is being built to reduce headcount. Meta announced the program at roughly the same time as reported plans for further workforce reductions, which is not subtle.

What to Watch

A few things worth keeping an eye on as this plays out:

  • Regulatory response in US states. California, New York, and Illinois have stronger employee privacy laws than the federal floor. If this triggers state-level action, other large employers will notice.
  • Scope creep. The program is bounded today to a list of work apps. “List of work apps” is the kind of scope that tends to widen.
  • Who copies the model. If Meta gets useful training data out of this and survives the PR, expect quieter versions of the same program at other frontier labs and large tech employers. The incentive is identical.
  • Employee departures. Whether this shifts who joins Meta, and who leaves, will be visible over the next few quarters in the usual ways.

The Model Capability Initiative is not, by itself, a uniquely dystopian program. It is a relatively ordinary use of the monitoring powers a US employer already has, applied to an unusually valuable data problem. What makes it worth paying attention to is how cleanly it lines up the three things AI labs usually try to keep separate in public: the data they need, the workers they plan to automate, and the legal asymmetry that lets them collect the first from the second.

Further Reading