Using facial recognition technology to analyze people's moods and infer identifying characteristics goes a step too far for Microsoft. The company is concerned about privacy and misuse of this technology. The U.S. hardware and software company will therefore stop using facial recognition for these purposes, effective immediately.

So writes Microsoft in a blog post (1).
Microsoft knows that facial recognition brings benefits to law enforcement and investigative agencies. At the same time, the company recognizes that there are also dangers and risks involved. To deal responsibly with this technology, Microsoft has laid out a set of ground rules in an internal document. This lists various security and precautionary measures so that the technology is used wisely and thoughtfully.
Microsoft regularly updates these guidelines. Employees recently reviewed the rules and tightened the requirements further. For example, new customers must now apply for access before they can use facial recognition technology in Azure Face API, Computer Vision and Video Indexer. Existing customers have one year to obtain this approval. As of June 30, 2023, they will no longer be allowed to use Microsoft's facial recognition technology if their application has not been approved.
"By introducing limited access, we are adding an additional layer of protection to the use and deployment of facial recognition to ensure that the use of these services is consistent with Microsoft's Responsible AI Standard and enables high-value end-user and societal benefit," the tech giant wrote in its blog.
In addition to this stricter access policy, Microsoft is also imposing a restriction on itself. Facial recognition technology used to infer people's emotional states and identifying characteristics is being restricted effective immediately. This technology makes it possible to tell whether someone appears happy, angry or sad. It can also be used to infer personal characteristics such as gender, makeup use, facial hair and the presence or absence of tattoos.
Microsoft believes such applications are a bridge too far at this time. "We have been working with internal and external researchers to understand the limitations and potential benefits of this technology and the tradeoffs," the company writes. Those discussions made clear that experts have privacy concerns.
They also doubt that a reliable link between facial expressions and emotional states can be demonstrated, let alone one that generalizes across demographic groups. Finally, there is the fear that access to such privacy-sensitive data could be misused for purposes such as discrimination and excluding people from services.
Effective immediately, new customers cannot use facial recognition technology to map people's moods and personal attributes. Existing customers have until June 30, 2023, to stop using these attributes.
The deployment of facial recognition technology has been the subject of heated debate in the U.S. for some time. In June, IBM chief Arvind Krishna announced he would stop researching, developing and offering products and services with facial recognition technology. He feared the technology could be used for mass surveillance, ethnic profiling and the promotion of discrimination and racial inequality.
For the same reasons, Amazon and Microsoft also stopped selling facial recognition technology at the time. They are waiting for politicians in Washington to introduce legislation that clarifies which uses are permitted.
In Europe, too, there is fierce debate about the use of smart cameras with facial recognition technology. The European Data Protection Board (EDPB) sees this as a step toward a mass surveillance society. "A society in which you can never again walk the streets unobserved. In which you cannot escape the piercing eyes of systems that watch you, recognize you, and can therefore follow you around all day. You step out the door knowing: I am constantly being watched. That is oppressive, and it makes people feel less free to be themselves," explained Aleid Wolfsen, vice-chair of the EDPB.
To ensure that the use of facial recognition is necessary and proportionate, the European regulator published new guidelines in May. If it were up to the EDPB, facial recognition in public spaces would be banned. The same applies to technology that classifies people based on ethnicity, gender, sexuality or political affiliation. Building databases of photos collected from public sources is also a no-go.
Finally, the regulator calls for a ban on technology that recognizes emotions. This technology, it says, has only drawbacks. "Think of employers who give employees a bad review if they look 'angry' too often while sitting at their computer, employees who have to smile to gain access to the workplace, or police who, as a precaution, stop people who, according to the system, are 'angry' or 'dissatisfied' while walking down the street," Wolfsen said.
