Writer, a San Francisco-based AI startup, has launched a new large language model (LLM) aimed at competing with major players like OpenAI and Anthropic. Unlike its rivals, Writer has kept development costs strikingly low, spending only $700,000 on its latest model, far below the millions other companies typically invest.
Writer attributes much of its cost savings to its use of synthetic data. Generated by AI to replicate real-world information, synthetic data reduces the need for vast amounts of human-created data, allowing Writer to train models efficiently while maintaining privacy and accuracy.
Synthetic data has become a growing trend in AI development, with companies like Amazon, Meta, and Microsoft-backed OpenAI also incorporating it. However, some experts caution that synthetic data must be used carefully to avoid biases or degrading model performance.
Growing Interest from Investors

Writer’s efficient model development has captured investor attention. The company is reportedly raising up to $200 million at a $1.9 billion valuation, nearly quadruple the more than $500 million it was worth in September 2023. That rapid growth reflects strong confidence in Writer’s methods and positions it as a serious competitor in the AI industry.
Writer unveiled its new generative AI model, Palmyra X 004, which was developed using its cost-efficient synthetic data pipeline. The company claims the model outperforms competitors while addressing enterprise needs across functions such as support, IT, operations, sales, and marketing.
CEO May Habib stated that traditional methods relying on massive datasets have limitations and that the future lies in precision training and innovative architecture. Writer’s approach, according to Habib, is focused on these areas and aims to fulfill critical enterprise demands.
Expanding Client Base

Writer’s generative AI models are already used by more than 250 enterprise customers, including Accenture, Uber, Salesforce, L’Oreal, and Vanguard, for tasks such as content generation, market analysis, and data summarization.
With the generative AI market expected to surpass $1 trillion in revenue within a decade, Writer’s cost-efficient models and synthetic data strategy position it for continued growth in a crowded field. In 2024 alone, investors have poured $26.8 billion into generative AI projects, a testament to the sector’s booming potential.
A Closer Look at the Synthetic Data Strategy

Writer’s cost-cutting has clearly resonated with investors, but its heavy reliance on synthetic data deserves closer scrutiny.
Synthetic data has allowed Writer to reduce its dependence on real-world data, which is becoming scarcer as AI companies exhaust available sources. Because it mimics human-generated data rather than copying it, the approach helps protect privacy while lowering costs. As adoption spreads to major players like Amazon and Meta, however, questions about the long-term viability of the technique are mounting.
Some experts warn that synthetic data can degrade model performance and even amplify existing biases. Writer’s CTO, Waseem Alshikh, has stressed that the company starts from real, factual data and converts it into synthetic form, but a heavy reliance on the method could still pose challenges as models grow in complexity.
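To make the kind of conversion Alshikh describes concrete, here is a minimal, purely illustrative sketch in Python. It is not Writer’s actual pipeline: a handful of real "seed" facts are recombined through paraphrase templates into many synthetic training examples, where a production system would typically use an LLM for the rewriting step. All names here (SEED_FACTS, TEMPLATES, synthesize) are hypothetical.

```python
import random

# Illustrative only: expand a few real, factual seed records into many
# synthetic training examples by recombining them through paraphrase
# templates. A real pipeline would use an LLM for the rewriting step;
# fixed templates stand in for it here.

SEED_FACTS = [  # hypothetical real-world records
    {"product": "invoice portal", "issue": "login fails after a password reset"},
    {"product": "analytics dashboard", "issue": "exports time out on large reports"},
]

TEMPLATES = [  # hypothetical paraphrase patterns
    "Customer reports that the {product} {issue}.",
    "Support ticket: {issue} when using the {product}.",
    "User complaint about the {product}: {issue}.",
]

def synthesize(n_examples: int, seed: int = 0) -> list[str]:
    """Return n_examples synthetic records grounded in the seed facts."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        rng.choice(TEMPLATES).format(**rng.choice(SEED_FACTS))
        for _ in range(n_examples)
    ]

if __name__ == "__main__":
    for example in synthesize(5):
        print(example)
```

The point of the sketch is the ratio: two real records yield an effectively unbounded stream of varied training examples, which is the economics behind a $700,000 training run, though it also shows why critics worry, since every synthetic example is only as accurate and as unbiased as the seed data it was derived from.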