The past week saw several important updates in the artificial intelligence sector, with companies and researchers pushing the boundaries of how AI is used in video creation, coding, accessibility, and even brain-computer interaction. These developments also brought new ethical and legal concerns, reminding us that the progress of AI continues to raise difficult questions for society, businesses, and regulators alike.
Midjourney Launches First Video Model
On June 20, 2025, Midjourney introduced its first video-generation model, called V1. The tool lets users turn still images into short, stylised video clips, guided by text prompts. The company, best known for its image-generation capabilities, now enters the competitive video-creation space, going up against platforms such as Runway.
Early user feedback has praised the model’s creative results, especially for artistic and surreal content, although quality and coherence can vary depending on the complexity of the input. Some have raised concerns about the computing power needed to generate these clips and whether copyright rules can fully address issues related to AI-generated video content.
ChatGPT Introduces Voice Record Mode
Meanwhile, OpenAI added a new voice recording feature to ChatGPT. The update, announced on June 19, lets users interact with the chatbot by speaking instead of typing. Available on both Android and iOS, the feature has been welcomed by users with physical disabilities and by those who simply find voice input more convenient.
However, concerns have surfaced about the handling and storage of voice data. Although OpenAI claims that encryption is in place, critics argue that better transparency is needed to earn user trust. This update strengthens ChatGPT’s role as a tool capable of accepting and processing multiple forms of input.
Claude Builds Code on MCP Servers
Anthropic’s Claude AI also made headlines after a developer demo showed the model writing and deploying code directly to MCP (Model Context Protocol) servers. The update signals Claude’s growing capabilities in software development, and users online have already discussed its potential to reduce manual work in programming.
There are also fears that such autonomy could be exploited if security protocols are not followed closely. The company says safety remains a priority, but experts warn that automated coding tools must be monitored carefully, especially when used in sensitive systems.
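For context, an MCP (Model Context Protocol) server essentially exposes a set of tools that a model such as Claude can invoke through structured messages. The sketch below is a loose, standard-library-only illustration of that idea; the payload shape, tool names, and dispatch logic are simplified assumptions for this article, not the real protocol or Anthropic's SDK.

```python
import json

# Illustrative sketch only: a toy registry of callable tools. Real MCP
# servers follow the Model Context Protocol specification and are usually
# built with an official SDK rather than hand-rolled dispatch like this.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-encoded tool call to a registered tool."""
    req = json.loads(raw)
    tool = TOOLS.get(req.get("tool"))
    if tool is None:
        # Unknown tool name: report an error instead of raising.
        return json.dumps({"error": f"unknown tool: {req.get('tool')}"})
    return json.dumps({"result": tool(req.get("args", {}))})

# A request shaped like one a model might issue:
print(handle_request('{"tool": "add", "args": {"a": 2, "b": 3}}'))
```

The security worry raised above maps directly onto this structure: whatever a registered tool is allowed to do, an autonomous model can do through it, which is why access control around the tool registry matters.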
Mind-Reading AI Advances in Sydney
In Australia, researchers in Sydney have developed an AI system that can read brain signals and convert them into spoken words with a 75% accuracy rate. The technology, designed with people who have speech impairments in mind, uses brain-computer interfaces and currently relies on implants. While the research shows promise, the invasive nature of the technology and the sensitive data it processes have raised fresh concerns about privacy and ethical consent. Experts say clear regulations will be necessary before this kind of interface becomes available to the public.
Apple Faces Shareholder Lawsuit Over AI Claims
Lastly, Apple has found itself in legal trouble after shareholders filed a class-action lawsuit claiming the company overstated its AI development progress, particularly in relation to Siri. The lawsuit, filed in California, accuses Apple of misleading investors and claims these overstatements led to weakened iPhone sales and declining stock performance. While some users have supported Apple’s slow and careful approach to AI, others argue that the company failed to deliver on its promises in time to stay ahead of competitors.