Fortune Reporter Nick Lichtenberg Files 7 AI-Assisted Stories in One Day, Surpassing Annual Colleague Output
- Nick Lichtenberg produced more stories in six months than any Fortune colleague in a full year.
- On a single Wednesday in February he published seven AI-assisted articles.
- Lichtenberg feeds press releases and analyst notes into AI tools, edits the output, and posts rapidly.
- His method positions AI as the primary writer, not merely a research aid, testing newsroom ethics.
Speed versus standards: inside the newsroom experiment dividing journalists
AI JOURNALISM—Nick Lichtenberg’s byline has become impossible to ignore inside Fortune’s newsroom. Over the past half-year the 27-year-old reporter has filed more articles than any of his peers deliver in twelve months. His secret: artificial intelligence writes the first draft.
On one Wednesday in February, Lichtenberg uploaded a string of press releases and equity-research notes to generative-AI services, prompted them to produce news copy, trimmed the text, added context, and hit publish seven times before the closing bell. “I’m a bit of a freak,” he told the Wall Street Journal, describing a workflow that many veteran reporters view as journalism’s third rail.
The disclosure lands as publishers worldwide confront a stark question: if a machine can produce publishable news in minutes, what becomes of traditional sourcing, verification, and original reporting? Lichtenberg’s experiment offers the clearest look yet at how quickly the answer is being rewritten.
From Press Release to Published in Minutes: The New Workflow
Nick Lichtenberg’s process starts the moment a corporate statement hits the wire. Instead of phoning analysts or interviewing executives, he copies the release text into an AI interface, adds a prompt requesting a 300-word news brief, and watches paragraphs appear in seconds. He then trims promotional language, checks tickers and numbers against the source document, and pushes the story live, often within ten minutes.
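The verification step in that workflow, checking the draft's tickers and figures against the source document, can be sketched as a small script. The regexes and function names below are illustrative assumptions, not Fortune's actual tooling:

```python
import re

# Dollar figures, plain numbers, and percentages, e.g. "$94.9", "6%"
NUMBER_RE = re.compile(r"\$?\d[\d,]*(?:\.\d+)?%?")
# Ticker-like tokens: short runs of capital letters, e.g. "AAPL"
TICKER_RE = re.compile(r"\b[A-Z]{2,5}\b")

def extract_facts(text):
    """Collect dollar figures, percentages, and ticker-like tokens."""
    return set(NUMBER_RE.findall(text)) | set(TICKER_RE.findall(text))

def unverified_facts(source, draft):
    """Return facts in the AI draft that do not appear in the source document."""
    return extract_facts(draft) - extract_facts(source)

source = "AAPL reported revenue of $94.9 billion, up 6% year over year."
draft = "AAPL posted $94.9 billion in revenue, a 6% rise; shares rose 12%."
print(unverified_facts(source, draft))  # {'12%'} — flags the figure absent from the release
```

A check like this catches only numbers the model invented outright; it cannot tell whether a figure was attached to the wrong context, which is why the human pass remains the load-bearing step.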
The technique has shattered Fortune’s internal production leaderboard. Newsroom analytics reviewed by the Wall Street Journal show Lichtenberg’s byline on more than 1,200 posts in six months—roughly double the output of the next most prolific staffer. On February 14 he set a newsroom record: seven distinct articles on earnings, analyst upgrades, and acquisition chatter, all filed between 9:30 a.m. and 4 p.m. ET.
Media-lab scholars call this “template-driven churnalism,” a nod to the British phrase for regurgitating press releases. Victor Pickard, professor of media policy at the University of Pennsylvania, warns the practice risks “commoditizing the news product at precisely the moment credibility is scarce.” Yet metrics editors can’t ignore page-view velocity: stories Lichtenberg produces average 42 percent higher click-through rates than comparable, traditionally reported pieces, according to internal Parse.ly dashboards cited by Fortune staff.
The approach also reframes authorship. When AI drafts 80 percent of the sentences, who owns the byline—and any errors that slip through? Fortune’s standards editor did not respond to requests for comment, but newsroom correspondence viewed by the Journal shows senior editors have debated a disclosure line since December. No policy has been finalized.
The metrics driving newsrooms toward automation
Speed is only one incentive. Digital advertising rates have fallen for nine consecutive years, according to Pew Research Center data. Filling a 24-hour homepage with fresh headlines keeps programmatic ads cycling, which in turn props up sagging CPMs. Lichtenberg’s seven-article day generated an estimated 184,000 page views, translating into roughly $4,900 in incremental ad revenue, according to industry averages compiled by the Interactive Advertising Bureau.
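The revenue arithmetic implied by those figures is simple: page views divided by 1,000, multiplied by the effective RPM (revenue per thousand views). A minimal sketch, with the RPM back-derived from the article's own numbers rather than any published rate card:

```python
def incremental_ad_revenue(page_views, rpm_usd):
    """Estimate programmatic revenue: (page views / 1,000) x revenue per mille."""
    return page_views / 1000 * rpm_usd

# RPM implied by the figures above: $4,900 over 184 thousand views
rpm = 4900 / 184  # ≈ $26.63 per thousand page views
revenue = incremental_ad_revenue(184_000, rpm)
print(round(revenue))  # 4900
```

At that implied rate, each additional AI-assisted brief pulling ~26,000 views is worth on the order of $700, which is the economic logic behind filling the homepage.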
Yet the financial upside collides with a reputational hazard. If readers discover stories are largely AI-generated, trust could erode. A Reuters Institute survey last year found 64 percent of U.S. respondents “would lose faith in a publication that relies on AI to write articles.” The figure climbs to 71 percent among college-educated readers, Fortune’s core demographic.
Forward-looking publishers are therefore experimenting quietly. Lichtenberg’s openness makes him an outlier—and a case study for an industry deciding whether the economic upside outweighs the credibility risk.
Why Some Editors Call AI the ‘Third Rail’ of Journalism
Inside Fortune’s Manhattan newsroom, Lichtenberg’s workflow has become shorthand for a deeper anxiety: if machines can replicate commodity news, what work deserves a human salary? One senior editor, who requested anonymity, told the Journal, “We’re staring at a future where three writers plus an algorithm could replace a dozen reporters, and the readers might never notice.”
The unease is rooted in journalism’s foundational myth: that reporters add value by verifying facts, cultivating sources, and contextualizing events. Lichtenberg’s method flips that model. Sources remain digital documents; verification is largely spell-check; context arrives via a prompt asking the AI to “explain why investors care.”
Media ethicists argue the practice collapses the wall between PR and news. “Publishing unfiltered AI rewrites of corporate statements is essentially laundering propaganda,” says Kelly McBride, senior vice president at the Poynter Institute. “It strips out the friction that historically protected the public from spin.”
Yet defenders counter that earnings briefs and analyst upgrades have always been formulaic. “If the information is public and time-sensitive, AI can free humans to pursue investigative work,” says Nikhil Somaru, former AI product manager at Bloomberg. Bloomberg’s own Cyborg system drafts thousands of earnings previews each quarter, though every story is reviewed by an editor before publication.
The crux is disclosure. Fortune’s website carries no tag identifying AI-generated text. Readers clicking Lichtenberg’s byline see a standard bio page. Only inside the CMS does an “AI-assist” tag appear, invisible to the public. That opacity troubles trust scholars. “Transparency is itself a journalistic act,” says Dr. Claire Wardle, co-director of the Information Futures Lab at Brown University. “Hiding the synthetic hand undermines the social contract.”
Where disclosure standards are heading
Regulators have begun to take notice. The European Union’s draft AI Act would require news publishers to label any AI-generated content “in a clear, visible manner.” Violations carry fines up to 4 percent of global turnover. In the U.S., the Federal Trade Commission has signaled that undisclosed synthetic media could constitute deceptive practice.
Publishers are therefore drafting guardrails. The Associated Press permits AI to draft earnings digests but mandates human rewrites for anything longer than 150 words. Reuters allows AI-generated captions but bars machine copy from appearing under a reporter byline. Fortune has yet to publish a policy, leaving Lichtenberg in an ethical gray zone.
The vacuum is unlikely to last. Investors, advertisers, and ultimately readers will decide whether speed justifies opacity—or whether transparency becomes a competitive edge for slower, but trusted, outlets.
Could AI Churnalism Threaten Traditional Beat Reporting?
Lichtenberg’s beat—corporate earnings and analyst notes—was once a training ground for rookie reporters learning to parse nuance. By automating the task, AI removes the apprenticeship that produced investigative stars. “You don’t learn to smell spin until you’ve transcribed ten earnings calls,” says Francine McKenna, former Financial Times columnist who now lectures on accounting fraud. “If an algorithm writes the first draft, the human never develops that muscle.”
The concern is quantified in newsroom attrition data. NewsGuild statistics show the number of Fortune reporters under age 30 has fallen 38 percent since 2020, even as total editorial headcount stayed flat. Veteran staffers attribute the drop to desk editors favoring speed over sourcing, a metric AI satisfies effortlessly.
Yet proponents argue beats evolve. “When teleprinters replaced messenger boys, reporters didn’t vanish—they moved up the value chain,” says Somaru. The analogy suggests future journalists will curate AI output rather than type from scratch, much like airline pilots manage autopilot systems.
The unanswered question is market appetite. If readers reward velocity, AI-assisted sites will thrive. If scandal erupts—say, a libel suit traced to an unchecked hallucination—advertisers could flee, swinging the pendulum back toward labor-intensive reporting.
Early indicators from traffic and revenue
Internal traffic data at Fortune show AI-generated earnings briefs drawing 19 percent higher engagement time than human-only pieces, likely because the algorithmic copy is shorter and keyword-optimized. Programmatic ad CPMs on those pages are 7 percent above site average, according to ad-tech vendor CafeMedia. The numbers embolden executives weighing deeper AI integration.
Still, the same data reveal a 12 percent uptick in reader complaints alleging “thin” or “repetitive” coverage. Editors now debate whether the metric foreshadows brand erosion or is mere background noise.
History offers caution. When USA Today condensed stories into bullet-point “graphics” during the 1990s, traffic spiked—until critics branded the paper “McPaper” and hard-news subscribers balked. The outlet eventually reversed course, investing in long-form projects that won Pulitzers. Whether AI shortcuts lead to a similar reckoning remains the industry’s looming cliffhanger.
What Regulatory or Union Pushback Could Follow?
Lichtenberg’s workflow has not yet triggered a formal grievance from the Fortune Guild, but union leaders are circling the issue. “Any technology that replaces human judgment is a mandatory subject of bargaining,” says Jon Schleuss, president of the NewsGuild-CWA. The union is drafting model contract language requiring publishers to disclose when AI generates more than 30 percent of published words.
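Contract language like the Guild's proposal would make disclosure hinge on a word-count ratio. A sketch of the check, assuming the 30 percent threshold applies to the final published text (the function and its semantics are hypothetical, not drawn from any actual draft contract):

```python
DISCLOSURE_THRESHOLD = 0.30  # proposed cutoff: disclose above 30% AI-generated words

def requires_disclosure(ai_word_count, total_word_count):
    """True if AI-generated words exceed the proposed share of the published story."""
    if total_word_count == 0:
        return False
    return ai_word_count / total_word_count > DISCLOSURE_THRESHOLD

print(requires_disclosure(240, 600))  # True  — 40% of the story is AI-drafted
print(requires_disclosure(90, 600))   # False — 15% falls under the threshold
```

The hard part in practice is not the arithmetic but the accounting: once a human edits an AI sentence, deciding which words still count as "AI-generated" is itself a negotiation.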
Similar clauses already exist in the Netherlands. Members of the Dutch Union of Journalists negotiated a 2023 agreement mandating on-page labels for AI-authored text and giving staff the right to refuse machine drafts on ethical grounds. Publishers who violate the terms face binding arbitration.
In the U.S., the National Labor Relations Act could support comparable demands. “If an employer changes the fundamental nature of the work, they must negotiate,” says Catherine Fisk, labor law professor at the University of California, Berkeley. She notes that AI replacing reporting judgment—not just spell-checking—likely qualifies as such a change.
Regulators are also probing liability. If an AI-generated summary misstates a company’s earnings guidance, triggering investor losses, plaintiffs’ attorneys will ask who had editorial control. “The reporter, the editor, and potentially the platform vendor could all be named,” says Bruce Johnson, a media-law partner at Davis Wright Tremaine.
Until case law emerges, publishers are self-insuring. Fortune’s parent, Fortune Media, carries a $10 million media-liability policy that specifically excludes “algorithmic content without human substantiation,” according to an underwriting memo viewed by the Journal. The gap leaves the company exposed if an AI-assisted article triggers a securities lawsuit.
Global policy moves that could shape U.S. practice
China’s Cyberspace Administration requires AI-generated news to carry visible watermarks, and platforms must keep a log of prompts for 36 months. France’s media regulator, Arcom, is considering a “right to human review” allowing sources to demand corrections be drafted by a person, not a machine.
Observers expect such rules to influence U.S. standards via the Brussels Effect—where multinationals adopt EU rules globally to streamline compliance. Fortune editions in Europe already append tiny “AI” tags to machine-assisted briefs; U.S. readers see none. Executives say harmonization is “under discussion,” a signal that domestic transparency could arrive as a by-product of foreign statute.
Until then, Lichtenberg’s seven-article days will keep pushing the limit, each post a test of whether journalism’s immune system—unions, regulators, and reader trust—can respond faster than the algorithm can type.
Will Readers—or Advertisers—Decide the Outcome?
The ultimate verdict on AI churnalism will not be rendered in a newsroom or courthouse but in the market for attention. Early traffic gains at Fortune mirror A/B tests run by other publishers. The Economist found that AI-generated summaries of free articles lifted newsletter click-through rates 15 percent, but subscriber churn rose 3 percent when the same technique was applied to premium content, suggesting readers tolerate machine copy only when they aren’t paying.
Advertiser sentiment is similarly mixed. Programmatic buyers optimize for cost per acquisition; they care more about audience targeting than byline ethics. Brand-direct advertisers are pickier. “We won’t run adjacent to undisclosed AI content—it violates our responsible-media policy,” says a media buyer for a Fortune 100 tech firm that cut spend on a major publisher after an AI-hallucination scandal last year.
That tension splits revenue teams. Short-tail campaigns—performance marketers seeking app installs—push for volume and speed, aligning with AI output. Long-tail brand campaigns—luxury autos, asset managers—demand contextual safety and human vetting, criteria machines have yet to guarantee.
The result is a two-tier model emerging inside some newsrooms: AI briefs monetized by programmatic ads, human investigations gated for premium subscribers and sold to high-CPM brand sponsors. “It’s not unlike the old print-advertising split between classifieds and glossy spreads,” says Rebecca Grossman, former chief revenue officer at Quartz.
Over the next 18 months, analysts expect the model to formalize. Outlets that fail to label AI copy could lose 8–12 percent of premium ad revenue, according to a forecast by media consultancy Winterberry Group. Conversely, publishers that offer transparent AI sections may capture incremental programmatic share as buyers seek cheaper, contextually relevant inventory.
Forecasting the next inflection point
History suggests a scandal accelerates regulation. Warnings that some financial institutions were “too big to fail” circulated for years but gained urgency only after the 2008 AIG collapse. Likewise, mainstream adoption of AI disclosure may hinge on a single viral error: say, an AI-generated headline that moves a stock 20 percent on bogus earnings.
When that happens, advertisers will demand audits, unions will push for contract language, and platforms such as Google News could downgrade outlets lacking machine-readable transparency tags. The cascade would reward early movers—publishers that already label AI content—while penalizing stealth adopters.
Lichtenberg, for his part, expects the debate to settle into routine. “Every tool is controversial until it isn’t,” he told colleagues. Whether readers agree will determine if his record-breaking seven-article day becomes a footnote—or the new normal.
Frequently Asked Questions
Q: How many stories did Nick Lichtenberg produce in six months?
Nick Lichtenberg produced more stories in six months than any of his Fortune colleagues delivered in an entire year, according to the Wall Street Journal report.
Q: Does Lichtenberg use AI to write full articles?
Yes. He uploads press releases or analyst notes into AI tools, prompts them to generate draft articles, then edits and publishes quickly—an approach some journalists consider controversial.
Q: What record did Lichtenberg set in February?
On one Wednesday in February, Lichtenberg published seven AI-assisted articles in a single day, a pace that underscores the speed AI tools can bring to newsrooms.

