With the rapid advance of generative AI models such as ChatGPT stoking fears about how AI will shape our near future, Apple has positioned itself as a leader in showing how AI and machine learning can actually be put to use for the greater good.
On May 16, 2023, Apple announced several new cognitive accessibility features for the iPhone and iPad, designed to help people with speech, hearing, or vision impairments. Apple has long offered Accessibility settings to help differently-abled users navigate its devices, and these updates are expected to make iPhones and iPads more accessible still.
What are the features?
Live Speech
Personal Voice
Point and Speak in Magnifier
Pause images with moving elements in Messages and Safari, and more
Among these, two have drawn the most attention: with Personal Voice and Live Speech, Apple is flexing its AI and machine learning muscle.
...users with cognitive disabilities can use iPhone and iPad with greater ease and independence with Assistive Access...
- Apple
What are the Personal Voice and Live Speech features?
Apple will allow iPhone and iPad users to create a Personal Voice, a replica of their real voice, by reading a set of sample text prompts aloud for 15 minutes.
Users can prerecord set phrases such as "I would like a black coffee" on their Apple devices, to be played back during live conversations over FaceTime and phone calls.
Apple also integrates Personal Voice with Live Speech, letting users type what they want to say and have their Personal Voice (the replica created by Apple's software) read it out to whoever they are speaking to on FaceTime or over a call.
With Personal Voice and Live Speech, Apple is essentially cloning the user's voice for use during real-time calls, so that it sounds as though the user is speaking.
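While Live Speech itself is a built-in system feature, iOS 17 also gives third-party apps access to Personal Voice through AVFoundation's speech-synthesis APIs. Below is a minimal Swift sketch of the Live Speech idea, typed text spoken in the user's own voice; the sample phrase and the fallback behavior are illustrative, not Apple's implementation.

```swift
import AVFoundation

// Keep a strong reference so speech is not cut off mid-utterance.
let synthesizer = AVSpeechSynthesizer()

// Ask for permission to use the device owner's Personal Voice (iOS 17+),
// then speak typed text with it, falling back to a system voice.
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    // Personal Voices appear among the installed voices, flagged by a trait.
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    let utterance = AVSpeechUtterance(string: "I would like a black coffee")
    if status == .authorized, let voice = personalVoice {
        utterance.voice = voice // the user's own cloned voice
    }
    synthesizer.speak(utterance)
}
```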
These features are aimed at people with conditions that limit speech, such as amyotrophic lateral sclerosis (ALS), and at those who are at risk of losing their ability to speak or who have already lost it.
At the end of the day, the most important thing is being able to communicate with friends and family.
- Philip Green, ALS advocate
Additionally, Apple announced a slew of features for users who are blind or have low vision. Point and Speak in Magnifier will let users point the camera at an object and have any text on it read out loud.
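Point and Speak is a built-in Magnifier capability rather than a public API, but the underlying pipeline, recognizing text in a camera frame and speaking it, can be approximated with Apple's Vision and AVFoundation frameworks. A rough sketch follows; the speakText(in:) helper is our own illustrative name, not an Apple API.

```swift
import Vision
import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Recognize any text in a camera frame and read it aloud.
func speakText(in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation]
        else { return }
        // Join the best candidate string from each detected text region.
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")
        guard !text.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: frame).perform([request])
}
```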
What about privacy and safety?
When it comes to technological advances such as ChatGPT, AI assistants, and voice or image cloning, there is always a question of safety and privacy, and Apple's voice feature is no different.
According to reports, artificial intelligence has already been used to clone the voices of family members to scam people, especially the elderly and those with limited knowledge of technology.
Video deepfakes are the ones we encounter most often, but audio deepfakes are on the rise as well.
Imagine getting a call from an unknown number in which a close friend's or sibling's voice asks you to urgently transfer money to an unfamiliar account because they are in danger. The urgency is manufactured so that the target has no time to verify the number or the account.
Apple has said that Personal Voice and Live Speech will use on-device machine learning to protect privacy: the recordings, and the voice model built from them, stay on the user's device rather than being uploaded to Apple's servers.
These updates draw on advances in hardware and software, include on-device machine learning to ensure user privacy, and expand on Apple's long-standing commitment to making products for everyone.
- Apple
Apple says the features will roll out later this year, ahead of the release of iOS 17.