Cluely, a new AI startup based in San Francisco, has raised $5.3 million in seed funding and sparked a contentious debate in the tech and education sectors. Founded by 21-year-old Chungin Lee, the company is gaining attention for its audacious claim that it lets users “cheat on everything.” The product is designed to provide discreet, real-time assistance during exams, sales calls, negotiations, and job interviews through a concealed in-browser window that interviewers and proctors cannot see. Backed by well-known investors Susa Ventures and Abstract Ventures, Cluely’s rapid ascent highlights both the enormous demand for AI-driven productivity tools and the ethical questions they can raise.
From University Suspension to Startup Success:
Cluely’s origins trace back to Lee’s days as a student at Columbia University. Along with co-founder Neel Shanmugam, Lee initially developed a tool called Interview Coder, which was specifically designed to help software engineers pass technical interviews by providing real-time, hidden AI-generated answers. The tool quickly gained notoriety after Lee posted a viral thread on X (formerly Twitter), revealing that he had been suspended from Columbia for his involvement in the project. The university’s disciplinary action was triggered not just by the tool itself, but by Lee’s decision to share recordings and details of his disciplinary hearing online.
Rather than stalling his ambitions, the suspension became a catalyst for Lee. He and Shanmugam left Columbia and expanded their vision, rebranding their project as Cluely and broadening its capabilities to cover a wide range of scenarios beyond just technical interviews. The company’s manifesto positions Cluely as a natural evolution in human-computer collaboration, likening it to the early days of calculators, spellcheck, and Google—tools that were once controversial but are now widely accepted.
How Cluely Works and Why It’s Stirring Debate:
Cluely’s main product is an AI-powered interface that discreetly feeds users answers and suggestions in high-stakes situations. Through a hidden browser window invisible to others, users can receive Cluely’s assistance during a sales pitch, online test, or virtual interview. The design is what makes the technology especially appealing, and especially controversial: even attentive interviewers or proctors cannot detect its use.
The startup’s marketing strategy has added fuel to the fire. In the launch video, Lee uses Cluely to pass himself off as an art expert and even to lie about his age; some viewers read this as a satirical swipe at the growing prevalence of AI in daily life, while others saw a troubling normalization of dishonesty. Critics caution that Cluely’s tool could erode trust in academic and professional evaluations, while its creators contend it is simply the next step in using AI for personal advantage.
The criticism has been swift and vocal. Educators, hiring managers, and ethicists have raised concerns that Cluely’s technology could compromise the fairness of tests and interviews by giving users an unearned advantage that masks their actual ability and knowledge. Some have compared the startup’s approach to a scene from “Black Mirror,” in which technology blurs the line between assistance and dishonesty.
Conclusion:
Cluely’s rise and rapid funding mark an important turning point in the development of AI technology. The startup’s pitch to “cheat on everything” challenges established norms of integrity, trust, and merit. Some consider it a threat to the foundations of fair competition, while others regard it as a creative, if provocative, extension of human-computer collaboration.
The debate over the ethical use of AI in tests and professional settings is likely to intensify as Cluely scales and similar tools follow. The company’s trajectory will be closely watched, not only for its technological innovation but also for how it shapes the conversation about what counts as acceptable behavior in the era of artificial intelligence.