Time: 2025-04-04 09:45:36 · Source: compiled from the web · Editor: General
Apple is dabbling in AI image-editing with an open-source multimodal AI model.
Earlier this week, researchers from Apple and the University of California, Santa Barbara released MLLM-Guided Image Editing, or "MGIE," a multimodal AI model that can edit images Photoshop-style based on simple text commands.
On the AI development front, Apple has been characteristically cautious about its plans. It was also one of the few companies that didn't announce any big AI plans in the wake of last year's ChatGPT hype. However, Apple reportedly has an in-house version of a ChatGPT-esque chatbot dubbed "Apple GPT" and Tim Cook said Apple will be making some major AI announcements later this year.
SEE ALSO: Tim Cook says big Apple AI announcement is coming later this year
Whether this announcement includes an AI image editing tool remains to be seen, but based on this model, Apple is definitely doing some research and development.
While there are already AI image editing tools out there, "human instructions are sometimes too brief for current methods to capture and follow," said the research paper. This often leads to lackluster or failed results. MGIE is a different approach that uses MLLMs, or multimodal large language models, to understand the text prompts or "expressive instruction," as well as image training data. Effectively, learning from MLLMs helps MGIE understand natural language commands without the need for heavy description.
In examples from the research, MGIE can take an input image of a pepperoni pizza and, given the prompt "make this more healthy," infer that "this" refers to the pepperoni pizza and that "more healthy" can be interpreted as adding vegetables. Thus, the output image is a pepperoni pizza with some green vegetables scattered on top.
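The pizza example illustrates MGIE's two-stage idea: a language model first expands a terse command into a concrete "expressive instruction," which then drives the actual edit. The toy sketch below mimics only that flow; the real MGIE uses an MLLM and a diffusion-based editor, whereas this stand-in uses a hand-written lookup table purely for illustration, and every name in it is hypothetical.

```python
# Toy illustration of MGIE's pipeline: brief prompt -> expressive
# instruction -> edit. The lookup table stands in for the MLLM, and the
# string output stands in for the diffusion editing step.

EXPANSIONS = {
    # Terse command mapped to the fuller edit plan an MLLM might infer.
    "make this more healthy": "add green vegetables on top of the pizza",
    "add lightning": "insert a lightning bolt and reflect it in the water",
}

def derive_expressive_instruction(prompt: str) -> str:
    """Stand-in for the MLLM step: turn a brief prompt into an edit plan."""
    return EXPANSIONS.get(prompt.lower(), prompt)

def edit_image(image: str, prompt: str) -> str:
    """Stand-in for the editing step: report which plan would be applied."""
    plan = derive_expressive_instruction(prompt)
    return f"{image} -> edited per: {plan}"

print(edit_image("pepperoni_pizza.png", "make this more healthy"))
```

The point of the intermediate step is that the downstream editor receives an unambiguous description ("add green vegetables") rather than the vague original ("more healthy"), which is what the paper argues current instruction-following methods fail to capture.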
In another example comparing MGIE to other models, the input image is a forested shoreline and a tranquil body of water. With the prompt "add lightning and make the water reflect the lightning," other models omit the lightning reflection, but MGIE successfully captures it.
MGIE is available as an open-source model on GitHub and as a demo version hosted on Hugging Face.
Topics: Apple, Artificial Intelligence