As of 21 November 2024.
In December 2016, Uber launched Uber AI, a division for researching AI technologies and machine learning.
Uber AI created multiple open-source projects, such as Pyro, Ludwig, and Plato.
Uber AI also developed new AI techniques and algorithms, such as the POET algorithm and a series of papers on neuroevolution.
Uber AI was shut down in May 2020 as the company refocused on its core operations in an effort to recover from financial losses caused by the COVID-19 pandemic.
In April 2021, the Court of Amsterdam ruled that Uber must reinstate and compensate six drivers who had allegedly been terminated solely by automated algorithmic decisions, in violation of Article 22 of the GDPR, which covers automated decisions causing “legal or significant impact”.
Uber challenged the ruling, claiming it had not been aware of the case and that the judgment had been issued by default without the company ever being notified; the decision was nevertheless upheld.
A UK labor union is taking Uber to court after one of its members was fired for failing two of Uber’s facial recognition checks.
The UK-based Independent Workers’ Union (IWGB) announced Wednesday that it is taking Uber to court over its facial recognition algorithm, which it claims discriminates against people of color.
The labor union’s legal complaint, which was shared with Motherboard, specifically concerns Uber’s “Real Time ID Check,” a facial recognition tool that periodically has drivers submit selfies through the Uber app to verify their identity.
The IWGB filed the complaint on behalf of one of its members who was fired in April after failing two consecutive ID checks.
“After submitting his photograph through the App, the Claimant received a message from Uber stating that he had failed to verify his identity and that his account had been waitlisted for 24 hours,” the complaint reads.
“On 14 April 2021 the Claimant was informed by Uber that his account had been deactivated after the second attempt at verification.”
“Before the decision was taken, the claimant was never offered a human facial recognition check,” it added.
After the decision, the driver allegedly went to Uber’s head office in London to challenge his deactivation.
According to the complaint, an Uber staff member confirmed that “the real-time photograph was a match with the profile picture of the Claimant that Uber had on file,” but told the driver “he was not able to do anything about this.”
“This is just one example of [Black, Asian, and Minority Ethnic] BAME drivers being terminated without due process,” Nader Awaad, chair of the IWGB’s driver branch, told Motherboard in a phone call.
“A few weeks ago I sent a letter to Uber asking them to sit down at the table with us and talk. It was an attempt to extend an olive branch, but I never received a reply.”
Nader claimed he knew more than a hundred drivers who have had issues with Uber’s ID check. In March, 14 BAME UberEats couriers told Wired that their accounts were frozen and in some cases terminated because they failed an ID check.
Uber’s Real Time ID Check relies on Microsoft’s Face API facial recognition tool. In its complaint, the IWGB cites a 2019 MIT study which found that the Face API is five times more likely to make an error when identifying a darker-skinned person compared to a lighter-skinned person.
Researchers have also found that all major facial recognition systems demonstrate similar bias against people with darker skin.
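To make the mechanics at issue concrete, here is a minimal sketch of the two-strike flow the complaint describes: a selfie is compared against the profile photo on file, a first failed match waitlists the account for 24 hours, and a second consecutive failure deactivates it with no human review. The `match_confidence` callback and the threshold value are hypothetical stand-ins for whatever verification backend is used (the complaint names Microsoft’s Face API); this illustrates the logic being challenged, not Uber’s actual code.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.5  # hypothetical cut-off; real systems tune this value


@dataclass
class DriverAccount:
    driver_id: str
    profile_photo: str
    consecutive_failures: int = 0
    status: str = "active"  # active -> waitlisted_24h -> deactivated


def real_time_id_check(account: DriverAccount, selfie: str, match_confidence) -> DriverAccount:
    """Apply the two-strike flow described in the complaint.

    `match_confidence(selfie, profile_photo) -> float` is a hypothetical
    stand-in for a face verification backend; it is not Uber's implementation.
    """
    if match_confidence(selfie, account.profile_photo) >= MATCH_THRESHOLD:
        account.consecutive_failures = 0
        account.status = "active"
        return account
    account.consecutive_failures += 1
    # First failure: the account is waitlisted for 24 hours.
    # Second consecutive failure: the account is deactivated.
    # No human review appears anywhere in this flow, which is
    # precisely what the IWGB complaint objects to.
    account.status = "waitlisted_24h" if account.consecutive_failures == 1 else "deactivated"
    return account
```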
Uber did not respond to a request for comment at the time of publication.
Alongside the lawsuit, the IWGB announced a strike action outside Uber’s London headquarters, backed by Black Lives Matter UK. It also called on consumers to boycott the company for 24 hours.
Getty Images is now using an ‘Enhanced Model Release’ form, which prompts photo models to consent to the use of biometrics like facial data.
Janus Rose
Biometric data is everywhere thanks to facial recognition technology and an endless supply of selfies on social media. Now, one of the largest stock image sites on the internet is offering a way to sign away rights to biometrics like facial data, with a release form that allows the information to be used by third parties.
Last week, Getty Images announced an “Enhanced Model Release” form that allows biometric information sharing as part of its media licensing platform, which is used heavily by media publications (including VICE). Release forms are a standard practice for media photographers, showing that a subject has consented to their image being used and sold. In addition to granting the normal rights for photos, Getty’s new agreement prompts subjects captured in images to provide consent for the licensing or use of their biometric data “for any purpose (except pornographic or defamatory purposes) including marketing or promotion of any product or service.”
“Biometric data is especially valuable because it can be used to recognise and map facial features extracted from visual content,” the company wrote in a press release. “Recently, there have been a spate of lawsuits around the use of biometric information without the explicit consent of people featured in visual imagery. While the law in this area is still evolving, developers should always start with collecting data from legitimate sources and obtaining authorization for its intended use.”
But some legal experts worry Getty’s enhanced model agreement is overly broad, and could lead to photographers, filmmakers, and agencies using biometric data for all kinds of unintended purposes.
“It’s way beyond what someone would need for including someone in a photoshoot or anything like that,” Frederic Jennings, a Brooklyn-based attorney who specializes in privacy and digital rights, told Motherboard. “Between that broad assignment language, and the equally broad waiver on biometric rights and prohibitions, this is a pretty huge rights grab snuck into what should be a simple release.”
Facial images and other biometric data are frequently used to train machine learning algorithms, often without the knowledge or consent of their subjects. Ten US states currently have laws protecting against the sale and use of biometric data, but that hasn’t stopped many companies from amassing giant troves of facial recognition templates. The notorious facial recognition company Clearview AI claims to have over 3 billion face images, and was targeted by cease-and-desist actions after it was found scraping people’s faces from Twitter, YouTube, and other social platforms.
Getty is hailing the new release form as an “industry first” that it hopes will become a standard practice for licensing images. But on a platform this large, the form could also lead models to unnecessarily sign away legal rights to their biometric data, including in states where the law would normally protect it by default.
On the other hand, Jennings argues that it could give people the chance to opt out in states without strong biometric privacy protections.
“In places that do have biometric protections written into law (or places where courts have read those into other laws), people would retain those rights by default, and would be just as well protected by that language being absent & undefined here,” said Jennings. “But I think it being called out is useful where those rights don’t exist—either by not signing, or signing a version with those sections crossed out, it would be a clear way to show that permission isn’t being granted.”
Researchers are building new ways to track and analyze your every glance—and big tech platforms like Facebook are already looking to make their own.
Janus Rose
With its much-hyped Meta re-brand, Facebook CEO Mark Zuckerberg made crystal clear that the company is going all-in on its vision for virtual social spaces. It’s not the first time a tech mogul has confidently proclaimed a virtual reality renaissance in which people will supposedly inhabit online avatars and spend real-world money on digital furniture.
But this time around, advances in machine learning are promising to give tech companies access to entire categories of extremely intimate data—including biometrics like eye movements that can potentially reveal highly sensitive details about our preferences and mindset.
In a new paper, researchers from Duke University describe a system called EyeSyn that makes analyzing a person’s eye movements easier than ever before. Instead of collecting huge amounts of data directly from human eyes, however, the researchers trained a set of “virtual eyes” that mimic real eye movements. The system is fed templates for typical eye movement patterns—such as reading text, watching a video, or talking to another person—and then learns to match and recognize those patterns in actual humans.
In other words, the system uses example data to guess what a person is doing or looking at based entirely on their eye movements.
According to the researchers, this process removes some of the privacy concerns associated with capturing large amounts of biometric data for training algorithms. Instead of using huge, cloud-based datasets filled with human eye movements, the EyeSyn system is trained to recognize eye patterns from the template models loaded onto a local device. This also makes the system less resource-intensive, so that smaller developers can render virtual environments without huge amounts of computing power.
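As a rough illustration of what template-based gaze classification can look like, the sketch below summarizes a gaze trace with two simple statistics and assigns it to the nearest activity template held on the local device. This is a generic sketch under my own assumptions (the template values and features are invented), not the EyeSyn authors’ actual pipeline.

```python
import numpy as np

FIXATION_SPEED_DEG_S = 30.0  # common heuristic threshold between fixations and saccades
SCALE = np.array([100.0, 1.0])  # crude normalization so both features carry weight

# Hypothetical reference templates: [mean gaze speed (deg/s), fixation fraction].
# EyeSyn builds reference patterns from synthetic "virtual eyes"; these numbers
# are made up purely for illustration.
TEMPLATES = {
    "reading":        np.array([15.0, 0.90]),
    "watching_video": np.array([40.0, 0.75]),
    "conversation":   np.array([80.0, 0.60]),
}


def gaze_features(t: np.ndarray, gaze_deg: np.ndarray) -> np.ndarray:
    """Summarize a gaze trace (t in seconds, gaze_deg of shape (n, 2) in degrees)
    as [mean angular speed, fraction of samples slower than the fixation threshold]."""
    speed = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / np.diff(t)
    return np.array([speed.mean(), (speed < FIXATION_SPEED_DEG_S).mean()])


def classify_activity(t: np.ndarray, gaze_deg: np.ndarray) -> str:
    """Assign the trace to the nearest template, entirely on the local device."""
    f = gaze_features(t, gaze_deg)
    return min(TEMPLATES, key=lambda name: np.linalg.norm((f - TEMPLATES[name]) / SCALE))
```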
But the researchers also admit eye tracking can be used to create predictive systems that determine what catches a person’s attention—and potentially, infer deeply private details that they never intended to reveal.
“Where you’re prioritizing your vision says a lot about you as a person,” wrote Maria Gorlatova, one of the study’s authors, in a statement released by Duke University. “It can inadvertently reveal sexual and racial biases, interests that we don’t want others to know about, and information that we may not even know about ourselves.”
A previous study from 2019 goes further, concluding that tracking a person’s gaze “may implicitly contain information about a user’s biometric identity, gender, age, ethnicity, body weight, personality traits, drug consumption habits, emotional state, skills and abilities, fears, interests, and sexual preferences.”
In other types of algorithmic systems like emotion recognition, many machine learning experts are extremely skeptical about the accuracy of these predictions. But that’s likely not going to stop tech companies from deploying them anyway—especially platforms like Facebook, which make money by monitoring and predicting users’ behavior in order to show them ads.
“When it comes to Facebook/Meta they’ve long ago exhausted the assumption of good faith operations, particularly when it comes to privacy,” Dr. Chris Gilliard, a professor at Macomb Community College who studies algorithmic discrimination, told Motherboard. “When I think about Meta’s push to make the ‘metaverse’ a place where people live, work, and play, there are many nefarious and frankly discriminatory ways this is likely to play out.”
The researchers behind EyeSyn are not working with Facebook, and say they’re hoping to open up the technology to smaller companies entering the VR market. Speaking with Motherboard, Gorlatova noted that eye tracking is distinct from other technologies that predict emotions by observing the entire face; some of its oldest uses have been in product testing, psychological studies, and medical applications, for example.
But more recently, tech companies have taken a renewed interest in developing the technology to measure things like cognitive activity by observing eye movements, blinking, and pupil dilation.
After it bought virtual reality company Oculus in 2014, Facebook said it had no plans to use biometric and motion sensor data to nudge user behavior or sell ads.
But more recently, Facebook’s parent company Meta was granted several patents related to eye tracking and biometric sensors, and seems intent on using those types of metrics to bolster its ad platform in the Metaverse.
Gorlatova emphasizes that privacy needs to be built into eye tracking technologies from the very start.
Specifically, she says data on eye movements should be processed locally on consumer-end devices, so that sensitive biometric information never makes it into the hands of Facebook or another third party.
“There are many promising techniques in this general space that train classifiers locally, without sending private data to the cloud … or add noise to the data before transmitting it to the cloud so that it does not reveal sensitive information about a specific user,” Gorlatova told Motherboard in an email. “I personally think that edge computing is the key to realizing many next-generation applications, including augmented reality specifically.”
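As a loose illustration of the second technique Gorlatova mentions, adding noise before anything leaves the device, here is a minimal sketch (my own, not her group’s code) that perturbs each gaze sample with Laplace noise, in the spirit of differential privacy, before it would ever be uploaded. The sensitivity and epsilon values are illustrative, not calibrated.

```python
import numpy as np


def privatize_gaze(gaze_deg: np.ndarray, sensitivity_deg: float = 2.0,
                   epsilon: float = 1.0, seed=None) -> np.ndarray:
    """Add Laplace noise to gaze angles (shape (n, 2), in degrees) on-device,
    so only the noised samples would ever be transmitted to a server."""
    rng = np.random.default_rng(seed)
    scale = sensitivity_deg / epsilon
    return gaze_deg + rng.laplace(loc=0.0, scale=scale, size=gaze_deg.shape)


# Example: the raw trace stays on the device; only the noised copy is shared.
raw = np.array([[12.3, -4.1], [12.5, -4.0], [20.2, 1.7]])
safe_to_upload = privatize_gaze(raw, seed=42)
```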
A ragtag group of hackers and OSINT professionals is using everything from open flight data to Google Maps to evacuate foreign students from Ukraine.
African students from Sumy arriving at the main train station in Lviv, Ukraine, on March 9, 2022. Students of Indian, Chinese, and various African nationalities arrived in Lviv from Sumy, a city in eastern Ukraine, through a humanitarian corridor created to evacuate them. Image: Salido/Anadolu Agency via Getty Images
Chimee, a 29-year-old Nigerian student, distinctly remembers a painting flying off his dormitory wall and smashing to the ground as explosions rang in the distance. Just five months ago, he had left Nigeria to finish his master’s degree in Sumy, a city in eastern Ukraine—now one of the focal points of the Russian invasion. Friends and family spammed his phone with messages telling him to leave Ukraine, but fighting had already begun and the railway tracks around the city had been damaged.
Still, a few days later, Chimee had once again packed his most important belongings into a suitcase and set his Google Maps in the direction of a new country, Poland—this time as a refugee.
Chimee, who asked that his last name not be printed to avoid repercussions when applying for European residency, is one of hundreds of foreign students who’ve been helped by an informal consortium of Open Source Intelligence (OSINT) experts who have used everything from open flight data to geolocation tools to help them evacuate Ukraine. The effort comes as BIPOC students have faced documented discrimination on the road to safety, including not being allowed on trains.
Five hundred kilometers to the north, as Russia began its invasion of Ukraine, Chris Kubecka found herself in a similar situation. The American cybersecurity expert was stationed in Kyiv to help in the event of a large-scale cyberattack on core infrastructure, such as nuclear power plants. Kubecka fled in a van with a haphazard group of people she’d met during the chaos. She hastily managed to get in touch with some friends and colleagues who worked with OSINT, who helped guide her safely over the Romanian border.
Immediately after arriving, Kubecka decided to put her connections and skills to work to help others in the same way she had been. Before long, she had brought together a ragtag consortium of hackers and OSINT experts to help evacuate people from parts of Ukraine under attack. Of significant concern were the country’s thousands of foreign students—many from the Global South—who have few domestic connections.
Despite its hasty creation, the group estimates it has helped more than 900 foreign students—many of whom have received only sporadic help from their embassies and Ukrainian authorities—flee the country. That estimate is based on a spreadsheet, viewed by Motherboard, in which the group tracked the hotel rooms and transportation it booked for students. Among them was Chimee, who is now hunkered down in an undisclosed location in Germany.
Chimee came into contact with Kubecka and the others on March 5, after he and other students had begun posting about their situation on social media. With access to food and water dwindling, videos of foreign students filling plastic bottles with snow circulated online. As Kubecka and the team worked with the Red Cross on an evacuation plan, they sent the students money to buy scarce food and medicine, along with tips on how to hold out for the right moment to flee.
Chimee recalls receiving a Google Doc titled “Survival Guide” put together by Kubecka as well as some students who had already escaped, and a former U.S. special forces operative, among others. The guide, seen by Motherboard, includes advice on what to do when caught in a crossfire (“if there is a lull in firing, attempt to improve your cover”) and shelling (“cover your ears and keep your mouth open to reduce the effect of blast pressure”).
On March 6, Chimee took his chance and approached a Ukrainian family parked next to the railway station. The family, which Chimee described as “incredibly kind,” drove him to a nearby city. Along his journey he received a continuously updated guide from the OSINT group showing, for example, routes blocked by fighting or demolished bridges, as well as ephemeral humanitarian corridors. When he made it to a railway station, he took a series of trains, broken up by multi-kilometer walks, before finally making it to the Polish border and to accommodation that Kubecka had arranged for him and other students. He remembered tears welling in his eyes when he finally realized he’d made it out alive.
“Making sure that these people have access to reliable and real-time intelligence and information can make the difference between life and death,” said Kubecka.
A few days later, Kubecka contacted the driver who drove her out of Ukraine. He owned a transport company and managed to get her in touch with other drivers in Ukraine willing to take the students. The students—mostly from the Global South, including Nigeria and India—had all studied at Sumy University. With relatively low tuition fees compared to its European neighbors, Ukraine is a common destination for non-EU students.
Working with a group of OSINT veterans, including former Bellingcat researcher Nico “Dutch OSINT Guy” Dekens, the group used geolocation tools to track the students’ movements in real-time and route them around Russian air-raids and ground troop activity. They sent them a constant stream of texts informing them to avoid certain roads and crossings, and pinpointed nearby shops and warehouses where they may be able to find food and shelter. Once the students reached a safe location, the team arranged for buses to pick them up and transport them further west toward safe countries.
“I have been using a mix of manual and (semi)automated tools to monitor a radius around Chris and the students. With monitoring I mean looking for Russian Military ground and/or air activity within a specific radius of the groups at that time, live and last known location in Ukraine,” Dekens wrote in an email to Motherboard.
“These tools and techniques give insight into various media and social media activity that is live and (near)real-time geolocated ‘eye witness’ reports that gave me the insight where it was safe or not safe for the group to move to,” he added.
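The kind of filtering Dekens describes, checking whether geolocated reports of military activity fall within a radius of a group’s last known position, can be sketched with nothing more than a great-circle distance calculation. This is a generic illustration under my own assumptions (the radius, data format, and coordinates are made up), not the group’s actual tooling.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))


def reports_near(group_pos, reports, radius_km=30.0):
    """Return geolocated reports (dicts with 'lat', 'lon', 'text') that fall
    within radius_km of the group's last known position (lat, lon)."""
    lat, lon = group_pos
    return [r for r in reports
            if haversine_km(lat, lon, r["lat"], r["lon"]) <= radius_km]


# Example with made-up coordinates near Sumy and a 30 km alert radius.
alerts = reports_near((50.91, 34.80), [
    {"lat": 50.95, "lon": 34.70, "text": "shelling reported"},
    {"lat": 49.84, "lon": 24.03, "text": "unrelated post from Lviv"},
])
```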
Many of the evacuated students are currently in Poland and Germany and are looking to continue their studies at European universities, if those universities will take them. A few others have decided to return home to Africa and India.
One of the largest challenges has been keeping track of where foreign students actually are, so they can be guided around potential perils. Kubecka has been working with Google on creating a secure way for students to transmit GPS data which can then be overlaid with OSINT-data on Google Maps to establish safe evacuation routes. In order for this to work, more coordination with embassies is vital, she emphasized.
After helping to evacuate students from Sumy, Kubecka and the group say they are now working to help 74 foreign students trapped in Kherson, a city in southern Ukraine occupied by Russian forces. So far, the group has not succeeded in evacuating students from cities surrounded or occupied by Russian forces; even leaving their homes to find basic necessities is already difficult.
With dwindling access to food and water and stuck in freezing temperatures, some of the 74 students in Kherson have fallen ill—some with COVID-19 symptoms, said Kubecka.
“We sent them information on how to use charcoal, burnt material, and pebbles to filter the water from melted ice after hearing that they’d just been drinking it—that can get you really sick,” Kubecka said on a Signal call. “It’s gross, but I’ve even advised them to drink water from the toilet tank if that’s all they can find.”
“The hardest advice I’ve had to give, especially to students who are starving and thirsty, is just to lay low and stay put,” she added.
DALL-E can generate images from a few key words—with predictably racist and sexist results.
Janus Rose
To the casual observer, DALL-E is Silicon Valley’s latest miraculous AI creation—a machine learning system that allows anyone to generate almost any image just by typing a short description into a text box. From just a few descriptive words, the system can conjure up an image of cats playing chess, or a teapot that looks like an avocado.
It’s an impressive trick using the latest advances in natural language processing, or NLP, which involves teaching algorithmic systems how to parse and respond to human language—often with creepily realistic results. Named after both surrealist painter Salvador Dalí and the lovable Pixar robot WALL-E, DALL-E was created by research lab OpenAI, which is well-known in the field for creating the groundbreaking NLP systems GPT-2 and GPT-3.
But just like those previous experiments, DALL-E suffers from the same racist and sexist bias AI ethicists have been warning about for years.
Machine learning systems almost universally exhibit bias against women and people of color, and DALL-E is no different. In the project’s documentation on GitHub, OpenAI admits that “models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content” and that the system “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.” The documentation comes with a content warning that states “this document may contain visual and written content that some may find disturbing or offensive, including content that is sexual, hateful, or violent in nature, as well as that which depicts or refers to stereotypes.”
It also says that the use of DALL-E “has the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity. These behaviors reflect biases present in DALL-E 2 training data and the way in which the model is trained.”
The examples of this from DALL-E’s preview code are pretty bad. For instance, including search terms like “CEO” exclusively generates images of white-passing men in business suits, while using the word “nurse” or “personal assistant” prompts the system to create images of women. The researchers also warn the system could be used for disinformation and harassment, for example by generating deepfakes or doctored images of news events.
Screenshots of results from the DALL-E system, which generates images from text. Image: OpenAI
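A systematic version of the spot checks described above might look something like the sketch below: generate a batch of images per occupation prompt and tally the output of some classifier of the subjects’ apparent presentation. Both `generate_image` and `label_subject` are hypothetical placeholders (the DALL-E 2 preview is gated, and no OpenAI tooling is assumed); the point is only to show how prompt-level skew can be measured.

```python
from collections import Counter
from typing import Callable


def probe_prompt(prompt: str,
                 generate_image: Callable[[str], object],
                 label_subject: Callable[[object], str],
                 n: int = 50) -> Counter:
    """Generate n images for one prompt and tally labels from some classifier.

    `generate_image` is a placeholder for a text-to-image call, and
    `label_subject` for a (noisy, itself potentially biased) classifier of the
    main subject's apparent demographic presentation.
    """
    return Counter(label_subject(generate_image(prompt)) for _ in range(n))


# e.g. compare probe_prompt("a CEO", gen, label) with probe_prompt("a nurse", gen, label);
# a heavy skew toward one label per prompt is the pattern the risk analysis describes.
```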
In a statement emailed to Motherboard, an OpenAI spokesperson wrote that the researchers had implemented safeguards for the DALL-E system, and noted that the preview code is currently only available to a select number of trusted users who have agreed to its content policy.
“In developing this research release of DALL-E, our team built in mitigations to prevent harmful outputs, curating the pretraining data, developing filters and implementing both human- and automated monitoring of generated images,” the spokesperson wrote. “Moving forward, we’re working to measure how our models might pick up biases in the training data and explore how tools like fine-tuning and our Alignment techniques may be able to help address particular biases, among other areas of research in this space.”
Some AI experts say that the core of this problem is not a lack of mitigations, but the increasing use of large language models (LLMs), a class of AI models with hundreds of billions of parameters that lets engineers teach machine learning systems to perform a variety of tasks with relatively little task-specific training. AI researchers have criticized large models like GPT-3 for producing horrifying results that reinforce racist and sexist stereotypes, arguing that the massive scale of these models is inherently risky and makes auditing the systems virtually impossible. Before being fired from Google, AI ethicist Timnit Gebru co-authored a paper which warned of the dangers of LLMs, specifically noting their ability to harm marginalized groups.
OpenAI offers no solutions to these issues, saying that it is in the early stages of examining bias in the DALL-E system and that its risk analysis should be regarded as preliminary.
“We are sharing these findings in order to enable broader understanding of image generation and modification technology and some of the associated risks, and to provide additional context for users of the DALL·E 2 Preview,” the researchers write. “Without sufficient guardrails, models like DALL·E 2 could be used to generate a wide range of deceptive and otherwise harmful content, and could affect how people perceive the authenticity of content more generally.”
This article has been updated with a statement from OpenAI.