YouTube is expanding its Creator Music feature with new AI-powered tools, including an AI dubbing tool, to help creators find and use music in their videos more easily.
Starting early next year, YouTube will launch a new feature that works like a music concierge. Creators can simply type in a description of what they need, such as the length or type of song they are looking for, and the Creator Music tool will suggest the right track at the right price.
This is a significant improvement over the current system, which requires creators to search for songs by title, artist, or genre. The new AI-powered feature will save creators time and help them find the perfect music for their videos more easily.
In addition, YouTube is also introducing an AI-dubbing tool called Aloud, which will be integrated into YouTube Studio. Aloud allows creators to generate a dub of their video in another language with just one click. This can be a huge time-saver for creators who want to reach a global audience.
Aloud is currently being tested with select creators and will open to more creators next year. YouTube first announced its plans to integrate Aloud into YouTube at VidCon earlier this year.
The new AI features in Creator Music are part of YouTube’s broader effort to make it easier for creators to produce high-quality content. YouTube is also launching a new creator app and a generative AI feature for Shorts.
The new AI features in Creator Music are a welcome addition for creators. They will make it easier to find and use music in videos, helping creators produce more engaging and popular content.
Meta is pushing deeper into AI territory with new AI editing tools in Instagram Stories, where users can edit images and videos simply by typing in what they want to modify. From hair color to special effects, the feature expands the possibilities for creators and regular users alike to personalize their content.
Text Prompts Meet Visual Creativity
Until now, Instagram’s AI editing tools were primarily accessible through Meta AI’s chatbot, which required users to interact via direct messages. With this latest integration, however, AI editing becomes native to Stories, allowing anyone to make instant visual edits using plain language commands.
These new editing features live under the “Restyle” menu, accessed via the paintbrush icon in Instagram Stories. Users can type commands such as “give me a sunset background,” “remove the person in the corner,” or “color my hair pink,” and the AI carries out the requested edit within seconds.
Meta notes that users choose from three primary actions — Add, Remove, or Change — and then specify what they’d like to alter. The AI will automatically add objects, alter appearances, or completely restyle the photo based on the description.
Preset Effects and Dynamic Video Edits
In addition to custom prompts, Instagram also offers preset AI effects that can beautify or stylize posts. Filters like sunglasses, a denim jacket, or even a watercolor art effect can be applied.
On video content, the feature goes even further — creators can superimpose atmospheric effects like falling snow, glowing embers, or cinematic lighting, making Stories appear more polished and professional without third-party editing apps.
Privacy and AI Usage Terms
While the new features enable creativity, they come with privacy implications. Using them introduces users to Meta’s AI Terms of Service, which allow the company to “analyze photos and videos, including facial data, to make AI better.” According to Meta, this allows its systems to “summarize image contents, edit images, and generate new content based on the image.”
Critics have raised concerns about how such data might be used to train Meta’s broader AI models, though the company says it remains committed to responsible innovation and transparency.
Meta’s Expanding AI Push
The release of AI editing software is just part of Meta’s overall strategy to roll out artificial intelligence on every platform it has, from Facebook and Instagram to WhatsApp. Recently, Meta began beta-testing a “Write with Meta AI” feature, which helps users compose intelligent or engaging comments under Instagram posts.
Meanwhile, Meta’s separate Meta AI app — with its chatbot and new “Vibes” AI-generated video stream — has been picking up steam. According to Similarweb estimates, iOS and Android daily active users rose from 775,000 to 2.7 million over a four-week span as of October 17.
Protecting Younger Users
In response to growing complaints from regulators and parents, Meta has also added new parental controls for its AI features. Parents can now turn off chats with AI characters and filter the topics their teens discuss with the chatbot, providing a safer online environment.
With these new tools, Instagram is positioning itself not only as a social network but as a creative platform fueled by generative AI. With Meta, OpenAI, and Google competing for leadership, this launch shows how AI is becoming part of the social fabric of our era — blurring the lines between creativity, technology, and self-expression.
Meta is rolling out red carpet treatment for AI startups with its new Llama for Startups initiative—offering cash, technical support, and exclusive access to its AI engineering team. But beneath the generous facade lies a fierce battle for dominance in the trillion-dollar generative AI market.
What Startups Get From Meta’s Program
💰 Up to $36,000 ($6K/month for 6 months) in cloud credits
🤝 Direct engineering support from Meta’s Llama team
🔧 Early access to custom Llama model fine-tuning tools
🌐 Networking with other AI-first startups
Eligibility requirements are surprisingly accessible:
U.S.-based incorporation
Less than $10M in total funding
At least one developer on payroll
Building generative AI products
Deadline to apply: May 30, 2025
Why Meta Needs Startups More Than Ever
Despite 1 billion+ Llama downloads, Meta faces mounting pressure:
🔥 Competitive Threats
Google’s Gemini and Anthropic’s Claude dominate enterprise adoption
OpenAI’s GPT-4o leads in multimodal capabilities
Mistral, DeepSeek, and Alibaba’s Qwen are winning open-source favor
🚨 Recent Llama Stumbles
Llama 4 Behemoth delayed due to underperformance (WSJ)
Benchmark cheating allegations on LM Arena leaderboard
Public vs. “optimized” model discrepancies eroding trust
💸 Meta’s Make-or-Break AI Bet
Projecting $2B–$3B in AI revenue in 2025
Banking on $460B–$1.4T by 2035 (yes, trillion)
Spending $900M+ annually just on GenAI R&D
The Hidden Strategy Behind the Startup Play
This isn’t just altruism — it’s a three-pronged strategy:
1. **Lock in future customers.** Startups that build on Llama today become enterprise buyers tomorrow.
2. **Crowdsource innovation.** Early adopters essentially beta-test new Llama capabilities for free.
3. **Combat open-source defections.** With alternatives like Mistral gaining traction, Meta needs to make Llama indispensable.
What’s Really at Stake?
Meta’s playing a long infrastructure game:
$60B–$80B earmarked for 2025 data centers
Revenue-sharing deals with cloud providers hosting Llama
Future Llama API monetization (Zuck hinted at ads/subscriptions)
For startups, the calculus is simple:
✅ Free money and support in a cash-strapped AI winter
❌ Risk of vendor lock-in as Llama evolves
Should Your Startup Apply?
The case for jumping in:
If you’re already using Llama, this is free acceleration
Early access could provide competitive edge
Meta’s engineering insights are gold dust for product refinement
Reasons to hesitate:
$36K doesn’t go far with today’s GPU costs
Potential IP concerns working closely with a tech giant
Llama’s long-term roadmap remains uncertain
The Bottom Line
Meta’s throwing a Hail Mary to cement Llama as the open-weight model of choice. For scrappy AI startups, it’s a rare chance to piggyback on Meta’s war chest—just don’t mistake it for a long-term partnership.