Automation Bias in the Abstraction Age

The more I dug into the meaning behind the term automation bias, the more horrified I became. The benign definition is as follows: “Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.” (1)

Horror #1 because this implies we are now in an age where we trust the machine more than we trust instinct and fact. We ignore gut feelings and certainties because we think the machine knows better. And as James Bridle recently put it, “Automation bias means that technology doesn’t even have to malfunction for it to be a threat to our lives.” (2)

Horror #2 because this concept is easily abstracted out of our day-to-day lives and its social impact. I have written about the age of abstraction and the dangers of putting complex things in a black box. Black boxes are scary because the information they house is viewable only to a select audience. (control)

Horror #3 because when you couple automation bias with black box abstraction, the path to the final source of truth is obfuscated. No single person has a full view. It is unclear whether we would understand everything even if we could see all the parts anyway (oh joy), but being abstracted out of both the logic and the code erases all possibility.

We need to be in an era of certainty and clarity, not doubt and abstraction.

5G

The Need For Speed: 5G

Struggling to make a list of apps or utilities on my phone where 5G becomes a game changer.

Yes, streaming could be better, but it’s not unusable, and with the collapse of net neutrality, who knows what true throughput you will actually get if you are on a competitive network.

Does my email need to be faster? Do texts or maps or any of the basic utilities? What exactly are the immersive experiences?

How essential is any of this beyond the need for better coverage and signal? Best I can tell, for the first few years 5G will simply make your phone work the way you expected it to.

At this point the story around 5G keeps shifting: one minute it’s a national threat, the next a negotiation tactic. It is not one but two attack vectors, and it hasn’t even been deployed. NYT: Russia is attacking 5G with health disinformation campaigns: Your 5G Phone Won’t Hurt You. But Russia Wants You to Think Otherwise.


manipulated media

A significant ‘manipulated media’ incident yesterday, with two separate doctored videos circulating, both focused on the mental health of officials in the highest positions in the country.

Manipulated Media Swarms

This narrative is not new; it also played out in the last election cycle. This post is not about politics; it’s about the use of swarm behaviors on social media, driven by manipulated media produced to drive men to action.

Legacy Media Reinforcement

First, slowed-down footage made to make the Speaker of the House look drunk clocked millions of views on Facebook. Then a doctored [changed in content or appearance in order to deceive; falsified] clip of stumbles, also aimed at illustrating mental instability, was retweeted by the POTUS and then verbally followed up at a media event.

Never Goes Away

Warnings about being manipulated by deep-fake video footage are prevalent in the news. While that may be a danger, a much lower-tech form of altered, edited, and ultimately doctored video that spreads on social media is working just fine.

These digital swarms land their harm before content is flagged, identified, or qualified, and more often than not it is never ‘taken down’. In the rare case of a takedown, the material never disappears; it only gets renamed and propagates again to other accounts and networks.
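
One way platforms could recognize a renamed re-upload (a technique I am adding here for illustration, not one named above) is perceptual hashing: frames of the same footage hash to nearly identical values even after renaming or light re-encoding. A minimal sketch in Python with Pillow, using placeholder filenames:

```python
from PIL import Image

def average_hash(path, size=8):
    """64-bit average hash: shrink to 8x8 grayscale, set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return int("".join("1" if p > mean else "0" for p in pixels), 2)

def hamming(a, b):
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Frames grabbed from an original clip and from a renamed re-upload
# ("frame_original.jpg" / "frame_reupload.jpg" are hypothetical filenames).
h1 = average_hash("frame_original.jpg")
h2 = average_hash("frame_reupload.jpg")
print("likely the same footage" if hamming(h1, h2) <= 10 else "different footage")
```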

Cheap Fakes Big Distraction

Threat scenarios around deep-fake videos describe the perfect combination of early-generation mobile handsets and poor picture quality due to limited bandwidth in third-world countries as the example environment in which manipulated videos spread.

Yesterday, here in the US, doctored, edited, slowed-down footage playing out on iPhones and the web, with an audience of ripe receptors, worked just fine. The term ‘cheap fake’ is also what we end up falling for and talking about instead of the issues at hand. Look over here.
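
To underline just how cheap the fake is, here is a rough sketch, assuming nothing about how the actual clip was produced, of slowing footage to 75% speed with OpenCV simply by writing the same frames at a lower frame rate. Audio handling is left out, and the filenames are placeholders.

```python
import cv2

cap = cv2.VideoCapture("original.mp4")        # placeholder input filename
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing identical frames at 75% of the original frame rate slows playback,
# which is enough to make ordinary speech sound slurred.
out = cv2.VideoWriter("slowed.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 0.75, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(frame)                          # frames untouched; only the timing changes

cap.release()
out.release()
```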

Digital Literacy

Our collective digital literacy is at an all-time low: dispute the text, question the camera, deny the video. All video networks need to label manipulated media ASAP. There are unique qualities to unedited, edited, and manipulated footage; we need to unpack them, and the language around this media, better.


Diversity in Design + AI

Series: Data Ethics and Diversity

Intersecting issues of data ethics (privacy, etc.) and diversity.

Trans-inclusive Design [alistapart.com] issues touching on content, images, forms, databases, IA, privacy, and AI—just enough to get you thinking about the decisions you make every day and some specific ideas to get you started.

Diversity & Inclusion Resources [aiga] There’s a lot of information about Diversity & Inclusion out there. We’ve compiled it in one place, so that whether you’re an AIGA chapter leader or a designer looking to learn more, you can start with a slew of great resources all in one place.

diversity.ai Preventing racial, age, gender, disability and other discrimination by humans and A.I. using the latest advances in Artificial Intelligence

Google’s the People + AI Guidebook This Guidebook will help you build human-centered AI products. It’ll enable you to avoid common mistakes, design excellent experiences, and focus on people as you build AI-driven applications. It was written for user experience (UX) professionals and product managers as a way to help create a human-centered approach to AI on their product teams. However, this Guidebook should be useful to anyone in any role wanting to build AI products in a more human-centered way.

In The Papers

Computer Vision and Pattern Recognition

For me, one of the fastest ways of learning is through my eyes.

arXiv:1905.01817 [pdf, other] Extracting human emotions at different places based on facial expressions and spatial clustering analysis

arXiv:1905.01920 [pdf, other] FaceShapeGene: A Disentangled Shape Representation for Flexible Face Image Editing

A favorite this week: pattern recognition for fashion recommendations…

arXiv:1905.03703 [pdf, other] Learning fashion compatibility across apparel categories for outfit recommendation

Data Ethics and Diversity Practices

Every organization needs to have strategies in place around data ethics: the impact of what is collected, how it is collected, retention, use, and ownership. The sister strategy to this is a proper process that makes inclusion and diversity a practice across everything, from product design to operations, management culture, core values… everything.

To paraphrase an old AT&T advert: if you are not, ‘you will.’

The implications for every organization are critical. Executed wrong, it can cost billions in the data department alone. Privacy is finally something people are talking about. Its opaqueness and abstraction are being peeled away, and people are horrified by unethical practices, lack of disclosure, and all the associated implications. See: social media, any day of the week.

If you are launching any product and have not checked whether it is culturally or morally tone-deaf, and designed it with diversity and data ethics in mind, you are doing it wrong.

This is an area of focus of mine, both as an investment thesis and in providing strategies for success. Data ethics is a business advantage with both profit and cost implications. So is diversity in everything you do, done the right way, with equality. The intersection of the two is a partnership for trust.

Mediaeater Reading List 2019

Nonhuman Photography
Zylinska, Joanna
The O. Henry Prize Stories 2018 (The O. Henry Prize Collection)
Furman, Laura (editor)
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
Zuboff, Shoshana
The Order of Time
Rovelli, Carlo
How To Build A Time Machine 
Davies, Paul
Memories of the Future
Hustvedt, Siri
Travels in Four Dimensions: The Enigmas of Space and Time 
Le Poidevin, Robin
The Spirit of Science Fiction: A Novel
Bolaño, Roberto 
Kerry James Marshall: Inside Out
Marshall, Kerry James
Fox 8
Saunders, George
Antwerp
Bolaño, Roberto 
The Parade
Eggers, Dave
Machines Like Me
McEwan, Ian
The Falconer 
Czapnik, Dana
Delta-v
Suarez, Daniel

Facial recognition technology (FRT) Roundup 

Today’s Daily Dish focus is facial recognition technology (FRT).

The collection of stories, links, and info below shows just how much the public and private sectors, scientific leaders, industry, and media are all calling for accountability around FRT.

The only ones not speaking up are our lawmakers. This is a critical time not to ignore the embed-first, seek-permission-later rollout of FRT.

“Facial Recognition is the Plutonium of AI” (PDF): Facial recognition’s radicalizing effects are so potentially toxic to our lives as social beings that the benefits of its widespread use do not outweigh the risks.

MTA’s Initial Foray Into Facial Recognition at High Speed Is a Bust [WSJ] Zero faces were detected within guidelines.

Privacy in 2034: A corporation owns your DNA (and maybe your body)   [fastcompany]

NYPD claws back documents on facial recognition it accidentally disclosed to privacy researchers [Daily News]

LAPD drops program that sought to predict crime amid bias accusations

Axon looking to add facial recognition to its body cams

Global Facial Recognition Market estimated to be 7.76 Billion USD by 2022

Let’s not forget who is driving the appending of offline information (FRT/LBS) to our online lives. To wit: Publicis to buy US digital marketing company Epsilon, which collects vast amounts of consumer data like transactions, location, and web activity, for $3.95B.

Amazon shareholders have forced a vote on the company’s deployment of FRT. No surprise: The Board Recommends That You Vote “Against” This Proposal (PDF), referring to Item 6—Shareholder Proposal Requesting A Ban On Government Use Of Certain Technologies, which concerns their AWS offering.

Big Brother at the Mall [WSJ] The privacy debate moves beyond e-commerce as magic mirrors and beacons log shoppers’ data in bricks-and-mortar stores.  

China / AI / FRT

Of the 11 artificial intelligence startups, the two most well-funded companies, SenseTime ($1,630M) and Face++ ($608M), are both from China and focus on facial recognition.

Related: multiple surveillance systems using @YITUTech facial recognition technology were accessible on the internet without any form of authentication, full of millions of recorded faces stored and indexed in MongoDB databases. Yes, that’s the same FRT a certain pop star used on her audience.

One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority [NYT] In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group.

One of the best sources of China AI information is this newsletter. A breakout paragraph from a recent issue around FRT and China: notably, the reporter also writes, “even if the public security can get our ‘location information based on the cameras we have passed in the past 24 hours,’ there is some controversy over whether the public security system has the right to monitor the life trajectory of each of us, and what places we have passed each day; compared with identity information, which is information necessary to maintain law and order, and there is constant need to register (the identity information). But the monitoring of the former (real-time location in the past 24 hours) is very likely to violate our privacy.” PLEASE STOP with the notion that Chinese people don’t care about privacy.


/Links

NYT The Privacy Project

Tracking Phones, Google Is a Dragnet for the Police (NYT) Google’s Sensorvault Is a Boon for Law Enforcement. This Is How It Works. (NYT)

The Hidden Horror of Hudson Yards Is How It Was Financed
Manhattan’s new luxury mega-project was partially bankrolled by an investor visa program called EB-5, which was meant to help poverty-stricken areas. This map makes me sick.

A.I. Is Changing Insurance, by Sarah Jeong [NYT OP-ED]

How the Anonymous Artist Banksy Authenticates His or Her Work

Pete For America – Design Toolkit. An excellent example of the parts required for a grassroots campaign.

How to Win Friends and Influence Algorithms [wsj] From YouTube to Instagram, what you see in your feeds isn’t really up to you—it’s all chosen by invisible, inscrutable bots. Here’s how to take back at least some control.

Urban Data + Sidewalk Labs

Everyone has a plan until they get punched in the face.

I am a staunch advocate of privacy because of the disproportionate intrusions into poor and minority communities. Sidewalk Labs (an Alphabet company) had long been on my radar for their hostile LinkNYC kiosks. Those data collection devices, advertising surfaces, and machine learning ingestion points are deployed across NYC sans any public dialog.

Why wasn’t a democratic process put in place to understand the benefits of all their ‘urban data’ collection? A clear and explicit public understanding of the ‘urban data’ collected and the associated value exchange is needed.

This week I watched, stunned, as Sidewalk Labs testified in Canada, trying to defend their process in front of the House of Commons Standing Committee on Access to Information, Privacy and Ethics. It did not go well.

Misunderstood or Second-Order Thinking Failure?

The more I learn about Sidewalk Labs, the more puzzled I am by the massive missteps in rolling out their key offerings.

Surprisingly, when digging into Sidewalk Labs’ vision of reimagining cities to improve quality of life, from large-scale data-driven smart cities to raincoats for buildings, it’s not all evil empire. There is much to like in their views, framework, and offering.

Dare I say it: Sidewalk Labs may be misunderstood, with most of it stemming from self-inflicted wounds.

That said, there is no excuse for the tactical data directives or the lack of any kind of transparency playbook. If there was a plan, no one in the cities they are approaching, or already operating in, knows it. This is by design.

For Alphabet, the project presents a chance to experiment with new ways to use technology — and data — in the real world. “This is not some random activity from our perspective. This is the culmination of almost 10 years of thinking about how technology could improve people’s lives,” said Mr Schmidt.

FT – Eric Schmidt

Ten years of thinking.

Let’s not go over all the tactical fails or massive strategic blunders. Let us instead focus on the single issue that every city in this nation needs to solve for right now.

The Hubris

Here in the US we are just beginning to understand our vulnerabilities around digital and social networks. The impact of the psychological and behavioral targeting taking place, and its consequences, still needs to be understood.

This paragraph from the Toronto Star sums up the nation’s consciousness nicely.

It doesn’t take long before the idea of sensors tracking every move of every adult and child who lives, works or even passes through the district starts to sound ominous. Especially in an era when data collected for one purpose by one entity is routinely repurposed for an entirely different use, and the people at the centre of all that data are often completely unaware of what’s being done and to what end.

Toronto Star

Sidewalk Labs rolls out anyway and targets new cities, which are now revolting against its secretive, dystopian offerings. Why would they launch before answering critical questions about the aptly described dystopian future above?

It is because they have arrogantly decided for everyone what is acceptable ‘urban data’ for them to collect and use. We still do not know what that is.

At the launch of Hudson Yards, aka Surveillance City, this quote stood out because it implies the use of facial recognition and emotion detection software.

“We can say how many people looked at this ad, for how long. Did they seem interested, bored, were they smiling?” he said.

Photo: Hudson Yards president Jay Cross (Credit: University of Toronto)
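
To make concrete what a quote like that implies, here is a minimal sketch of the “how many people looked, and for how long” half of it, using OpenCV’s stock Haar-cascade face detector on a hypothetical camera feed. Nothing here reflects Hudson Yards’ actual system; gauging “interested, bored, smiling” would require additional classifiers on top of this.

```python
import cv2

# Stock Haar-cascade face detector that ships with OpenCV (no deep learning required).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                  # 0 = default camera; a kiosk would use its own feed
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0    # fall back to 30 fps if the camera does not report it

face_frames = 0                            # frames in which at least one face was toward the ad
total_frames = 0

while total_frames < int(fps * 60):        # sample roughly one minute of footage
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        face_frames += 1
    total_frames += 1

cap.release()
print(f"approx. seconds with a viewer facing the ad: {face_frames / fps:.1f}")
```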

Silicon Valley’s technology vision for cities is that technology can make our lives better. Sidewalk Labs wants to be the one we trust “to improve quality of life,” but their failure to engage the citizens they want to serve is a strategy that is turning into a dance of a thousand cuts.

Who Owns Urban Data?

The surveillance economics at play are these: Sidewalk Labs is harvesting our life events (aka ‘urban data’) through behavioral analytics. That data is an asset class that becomes recurring revenue to benefit Alphabet/Sidewalk Labs shareholders, not the citizens of the city.

Questions around ‘urban data’ that every city needs to answer right now:

  • What defines public urban data?
  • Do municipalities need to hand over control to private companies? If so, why?
  • What democratic process took place to define this?
  • What urban data is considered personal at the point of collection? (Are people’s gait, face, and shape all considered fair game upon entering public space? Who defined that, and does it comply with the law?)
  • What is the imperative to collect it?
  • Who are the deciders on governance?
  • Who owns it and has access to it?
  • Who regulates it?
  • How can urban data be kept separate from online data?


The questions not being asked are even more important. Data collection points have been weaponized, and the public is unaware. Sidewalk Labs’ role here should have been that of a public utility, not a government/private spy org.

Any talk of governance of data needs to account for machine learning and AI capabilities. You don’t need to save data to derive value from it.
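
As a toy illustration of that last point, here is a sketch, entirely my own example and not anything Sidewalk Labs has described, of deriving a useful hourly foot-traffic count while discarding every raw sensor event as it arrives. The event fields are hypothetical.

```python
from collections import Counter
from datetime import datetime

hourly_counts = Counter()   # the only thing ever retained: a count per hour bucket

def observe(event):
    """Consume one pedestrian-detection event and keep only an aggregate.

    `event` is a hypothetical dict such as {"timestamp": ..., "location": ...};
    the raw record is never written anywhere.
    """
    hour = datetime.fromtimestamp(event["timestamp"]).strftime("%Y-%m-%d %H:00")
    hourly_counts[hour] += 1
    # `event` goes out of scope here; nothing identifying is stored

# Usage: feed events as they stream in, then publish only the aggregate.
observe({"timestamp": 1558000000, "location": "hypothetical intersection"})
print(dict(hourly_counts))
```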

Democratic Process + Discussion

What were they thinking, not setting these critical issues out for public discussion? Sidewalk Labs’ collect-first, ask-questions-later approach is a mirror held up to the men at the table defending the position.

What’s Next

A legal injunction needs to be put in place, stat, to stop the deployment and collection of urban data. This process needs to be restarted.

Done correctly, everyone could benefit. Right now Sidewalk Labs is setting itself up for a potential fall in Canada, real hatred for its NYC kiosks, and future legal ramifications for its product.

It could have been a block party.

Related reading:

How China Turned A City Into A Prison [nyt]

A.I. Experts Question Amazon’s Facial-Recognition Technology [NYT] At least 25 prominent artificial-intelligence researchers, including experts at Google, Facebook, Microsoft and a recent winner of the prestigious Turing Award, have signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color.