Apple's WWDC23 keynote never says "artificial intelligence," preferring "machine learning"
In its WWDC 2023 keynote on June 6th, Apple not only unveiled highly anticipated new products such as the Mac Pro and the Vision Pro headset, but also showcased its latest progress in machine learning. However, IT Home noticed that, unlike competitors such as Microsoft and Google that heavily promote generative AI, Apple never uttered the phrase "artificial intelligence" during the keynote, opting instead for terms such as "machine learning" and "ML".
For example, while demonstrating iOS 17, Craig Federighi, Apple's Senior Vice President of Software Engineering, introduced improvements to autocorrect and speech recognition:
"Automatic error correction is driven by machine learning on the device, and over the years, we have continuously improved these models. The keyboard now utilizes a Transformer language model, which is currently the most advanced word prediction technology, making automatic error correction more accurate than before. Moreover, with the powerful performance of the Apple Silicon chip, the iPhone can run this model every time you press a key."
It is worth noting that Apple did use one term from the AI field in its keynote: "transformer". The company specifically spoke of a "transformer language model", meaning its model uses the transformer architecture, the technology underlying many recent generative AI systems, such as the DALL-E image generator and the ChatGPT chatbot. The transformer, first proposed in 2017, is a neural network architecture for natural language processing (NLP) that uses a self-attention mechanism to weigh the importance of the different words or elements in a sequence. Because it can process its input in parallel, it is significantly more efficient than earlier sequential architectures and has driven breakthroughs in NLP tasks such as translation, summarization, and question answering.
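To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation of the transformer architecture. The toy dimensions and random weights are illustrative assumptions only; they bear no relation to Apple's actual keyboard model.

```python
# A minimal sketch of scaled dot-product self-attention (Vaswani et al., 2017).
# Random weights stand in for learned projections; purely illustrative.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) token embeddings -> contextualized embeddings."""
    d_model = x.shape[-1]
    rng = np.random.default_rng(0)
    # In a real model these three projections are learned during training.
    W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    # Every position attends to every other position in parallel.
    scores = Q @ K.T / np.sqrt(d_model)              # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted mix of values

# Four "tokens", each represented by an 8-dimensional embedding.
tokens = np.random.default_rng(1).standard_normal((4, 8))
print(self_attention(tokens).shape)  # (4, 8)
```

The key property on display is parallelism: the attention weights for all positions are computed in one matrix product, rather than step by step as in recurrent models.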
According to Apple, the new transformer model in iOS 17 enables sentence-level autocorrection: pressing the space bar can complete a word or an entire sentence. The model also learns from your writing style to guide its suggestions. Apple added that speech recognition "uses a transformer-based speech recognition model that leverages the Neural Engine to make speech recognition more accurate."
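Apple did not detail how its completion works, but the underlying idea of next-word prediction can be sketched simply: at each step, the model picks a likely continuation given the text so far. In this hedged toy example, a hard-coded bigram table stands in for the transformer language model; the words and probabilities are invented for illustration.

```python
# Toy greedy text completion: repeatedly append the most likely next word.
# The hard-coded bigram table is a stand-in for a real language model.
BIGRAMS = {
    "on": {"my": 0.6, "the": 0.4},
    "my": {"way": 0.8, "phone": 0.2},
    "way": {"home": 0.7, "to": 0.3},
    "home": {},  # no continuation: stop here
}

def complete(prefix: list[str], max_words: int = 5) -> list[str]:
    words = list(prefix)
    for _ in range(max_words):
        candidates = BIGRAMS.get(words[-1], {})
        if not candidates:
            break
        # Greedy decoding: take the single most probable continuation.
        words.append(max(candidates, key=candidates.get))
    return words

print(" ".join(complete(["on"])))  # -> "on my way home"
```

A production model conditions on the whole sentence (and, per Apple, on the user's writing style) rather than only the previous word, but the decoding loop has the same shape.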
Elsewhere in the keynote, Apple invoked "machine learning" repeatedly: for the new iPad lock screen ("When you select a Live Photo, we use an advanced machine learning model to synthesize additional frames"); for the iPadOS PDF feature ("Thanks to new machine learning models, iPadOS can identify the fields in a PDF, so you can use AutoFill to quickly fill in information such as names, addresses, and emails from your contacts"); for AirPods Adaptive Audio ("With Personalized Volume, we use machine learning to understand your listening preferences over time"); and for the Apple Watch Smart Stack widget feature ("Smart Stack uses machine learning to show you relevant information right when you need it").
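The Live Photo feature above is a form of frame interpolation: synthesizing in-between frames from existing ones. As a point of reference only, here is the naive non-ML baseline, a linear cross-fade between two frames; Apple's "advanced machine learning model" would instead predict motion between frames, which this sketch does not attempt.

```python
# Naive frame interpolation baseline: linearly blend two adjacent frames.
# A learned model predicts motion instead of cross-fading; this is only
# the simplest possible illustration of "synthesizing additional frames".
import numpy as np

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Return a synthetic frame at fraction t between frame_a and frame_b."""
    return ((1.0 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

a = np.zeros((2, 2, 3), dtype=np.float32)   # black 2x2 RGB frame
b = np.ones((2, 2, 3), dtype=np.float32)    # white 2x2 RGB frame
mid = interpolate(a, b, 0.5)                # uniform grey in-between frame
print(mid[0, 0])  # [0.5 0.5 0.5]
```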
Apple also launched a new app called Journal, which uses on-device machine learning to provide personalized suggestions that inspire journal entries. These suggestions are generated intelligently from a user's recent activity, including photos, people, locations, and workouts, making it easier to start writing.
Finally, during the Vision Pro headset demonstration, the company revealed that the dynamic image of the user's eyes shown on the device is driven by a special 3D avatar created by scanning your face, and that this, too, is powered by machine learning.