Diversity in Design + AI

Series: Data Ethics and Diversity

Intersecting issues of data ethics (privacy, etc.) and diversity.

Trans-inclusive Design [alistapart.com] covers issues touching on content, images, forms, databases, IA, privacy, and AI—just enough to get you thinking about the decisions you make every day, plus some specific ideas to get you started.

Diversity & Inclusion Resources [aiga] There’s a lot of information about Diversity & Inclusion out there. We’ve compiled it in one place, so that whether you’re an AIGA chapter leader or a designer looking to learn more, you can start with a slew of great resources all in one place.

diversity.ai Preventing racial, age, gender, disability and other discrimination by humans and A.I. using the latest advances in Artificial Intelligence

Google’s People + AI Guidebook – This Guidebook will help you build human-centered AI products. It’ll enable you to avoid common mistakes, design excellent experiences, and focus on people as you build AI-driven applications. It was written for user experience (UX) professionals and product managers as a way to help create a human-centered approach to AI on their product teams. However, this Guidebook should be useful to anyone in any role wanting to build AI products in a more human-centered way.

In The Papers

Computer Vision and Pattern Recognition

For me one of the fastest ways of learning is through my eyes.

arXiv:1905.01817 [pdf, other] Extracting human emotions at different places based on facial expressions and spatial clustering analysis

arXiv:1905.01920 [pdf, other] FaceShapeGene: A Disentangled Shape Representation for Flexible Face Image Editing

A favorite this week – pattern recognition for fashion recommendations…

arXiv:1905.03703 [pdf, other] Learning fashion compatibility across apparel categories for outfit recommendation

Data Ethics and Diversity Practices

Every organization needs to have strategies in place around data ethics — the impact of what is collected, how it is collected, and its retention, use, and ownership. The sister strategy is a proper process that makes inclusion and diversity a practice in everything from product design to operations, management culture, core values… everything.

To paraphrase an old AT&T advert: if you are not, ‘you will.’

The implications for every organization are critical. Executed wrong, it can cost billions in the data department alone. Privacy is finally something people are talking about. Its opaqueness and abstraction are being peeled away, and people are horrified by unethical practices, lack of disclosure, and all the associated implications. See: social media, any day of the week.

If you are launching any product and have not checked it for being culturally or morally tone-deaf, and designed it with diversity and data ethics in mind — you are doing it wrong.

This is an area of focus of mine, both as an investment thesis and in providing strategies for success. Data ethics is a business advantage with both profit and cost implications. So is diversity in everything you do, done the right way and with equity. The intersection of the two is a partnership for trust.

Mediaeater Reading List 2019

Nonhuman Photography – Zylinska, Joanna
The O. Henry Prize Stories 2018 (The O. Henry Prize Collection) – Furman, Laura (editor)
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power – Zuboff, Shoshana
The Order of Time – Rovelli, Carlo
How To Build A Time Machine – Davies, Paul
Memories of the Future – Hustvedt, Siri
Travels in Four Dimensions: The Enigmas of Space and Time – Le Poidevin, Robin
The Spirit of Science Fiction: A Novel – Bolaño, Roberto
Kerry James Marshall: Inside Out – Marshall, Kerry James
Fox 8 – Saunders, George
Antwerp – Bolaño, Roberto
The Parade – Eggers, Dave
Machines Like Me – McEwan, Ian
The Falconer – Czapnik, Dana
Delta-v – Suarez, Daniel

Facial recognition technology (FRT) Roundup 

Today’s Daily Dish focus is facial recognition technology (FRT).

The collection of stories, links, and info below shows just how much the public and private sectors, scientific leaders, industry, and media are all calling for accountability around FRT.

The only ones not speaking up are our lawmakers. This is a critical time not to ignore the embed-first, seek-permission-later rollout of FRT.

“Facial Recognition is the Plutonium of AI:”  (PDF) Facial recognition’s radicalizing effects are so potentially toxic to our lives as social beings that its widespread use doesn’t outweigh the risks.

MTA’s Initial Foray Into Facial Recognition at High Speed Is a Bust [WSJ] Zero faces were detected within guidelines.

Privacy in 2034: A corporation owns your DNA (and maybe your body)   [fastcompany]

NYPD claws back documents on facial recognition it accidentally disclosed to privacy researchers [DailyNews] —LAPD drops program that sought to predict crime amid bias accusations ——- Axon looking to add facial recognition to its body cams

Global facial recognition market estimated to be 7.76 billion USD by 2022.

Let’s not forget who is driving the append of offline information (FRT/LBS) to our online lives. To wit: Publicis to buy US digital marketing company Epsilon, which collects vast amounts of consumer data like transactions, location, and web activity, for $3.95B.

Amazon shareholders have forced a vote on the company’s deployment of FRT. No surprise: The Board Recommends That You Vote “Against” This Proposal (pdf), referring to Item 6 — Shareholder Proposal Requesting A Ban On Government Use Of Certain Technologies — which concerns their AWS.

Big Brother at the Mall [WSJ] The privacy debate moves beyond e-commerce as magic mirrors and beacons log shoppers’ data in bricks-and-mortar stores.  

China / AI / FRT

Of the 11 artificial intelligence startups, the two most well-funded companies, SenseTime ($1,630M) and Face++ ($608M), are both from China and focus on facial recognition. Related: multiple surveillance systems using @YITUTech facial recognition technology were accessible to the internet without any form of authentication, full of millions of recorded faces stored and indexed in MongoDB databases. Yes, that’s the same FRT a certain pop star used on her audience. One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority [NYT] In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group.

One of the best sources of China AI information is this newsletter. A breakout paragraph from a recent issue around FRT and China: notably, the reporter also writes, “even if the public security can get our ‘location information based on the cameras we have passed in the past 24 hours,’ there is some controversy over whether the public security system has the right to monitor the life trajectory of each of us, and what places we have passed each day; compared with identity information, which is information necessary to maintain law and order, and there is constant need to register (the identity information). But the monitoring of the former (real-time location in the past 24 hours) is very likely to violate our privacy.” PLEASE STOP with the notion that Chinese people don’t care about privacy.


/Links

NYT The Privacy Project

Tracking Phones, Google Is a Dragnet for the Police (nyt) Google’s Sensorvault Is a Boon for Law Enforcement. This Is How It Works. (NYT)

The Hidden Horror of Hudson Yards Is How It Was Financed
Manhattan’s new luxury mega-project was partially bankrolled by an investor visa program called EB-5, which was meant to help poverty-stricken areas. This map makes me sick

A.I. Is Changing Insurance Sarah Jeong. [NYT OP-ED]

How the Anonymous Artist Banksy Authenticates His or Her Work

Pete For America – Design Toolkit. An excellent example of the parts required for a grassroots campaign.

How to Win Friends and Influence Algorithms [wsj] From YouTube to Instagram, what you see in your feeds isn’t really up to you—it’s all chosen by invisible, inscrutable bots. Here’s how to take back at least some control.

Urban Data + Sidewalk Labs

Everyone has a plan until they get punched in the face.

I am a staunch advocate of privacy because of the disproportionate intrusions into poor and minority communities. Sidewalk Labs (an Alphabet company) had long been on my radar for their hostile LinkNYC kiosks. Those data collection devices, advertising surfaces, and machine learning ingestion points are deployed across NYC sans any public dialog.

Why wasn’t a democratic process put in place to understand the benefits of all their ‘urban data’ collection? A clear and explicit public understanding of the ‘urban data’ collected and the associated value exchange is needed.

This week I watched stunned as Sidewalk Labs testified in Canada trying to defend their process in front of The House Ethics Standing Committee on Access to Information, Privacy and Ethics. It did not go well.

Misunderstood or Second-Order Thinking Failure?

The more I learn about Sidewalk Labs, the more I am completely puzzled by the massive missteps in rolling out their key offerings.

Surprisingly, digging into Sidewalk Labs’ vision of reimagining cities to improve quality of life (large-scale data-driven smart cities, raincoats for buildings), it’s not all evil empire. There is much to like in their views, framework, and offering.

Dare I say, Sidewalk Labs may be misunderstood, most of it stemming from self-inflicted wounds.

That said, there is no excuse for the tactical data directives or the lack of any kind of transparency playbook. If there was a plan, no one in the cities they are approaching knows it. This is by design.

For Alphabet, the project presents a chance to experiment with new ways to use technology — and data — in the real world. “This is not some random activity from our perspective. This is the culmination of almost 10 years of thinking about how technology could improve people’s lives,” said Mr Schmidt.

FT – Eric Schmidt

Ten years of thinking.

Let’s not go over all the tactical fails or massive strategic blunders. Let us instead focus on the single issue that every city in this nation needs to solve for right now.

The Hubris

Here in the US we are just beginning to understand our vulnerabilities around digital and social networks. The psychological and behavioral targeting taking place needs to be understood, along with its consequences.

This graph from the Toronto Star sums up the nation’s consciousness nicely.

It doesn’t take long before the idea of sensors tracking every move of every adult and child who lives, works or even passes through the district starts to sound ominous. Especially in an era when data collected for one purpose by one entity is routinely repurposed for an entirely different use, and the people at the centre of all that data are often completely unaware of what’s being done and to what end.

Toronto Star

Sidewalk Labs rolls out anyway and targets new cities, which are now revolting against their dystopian secret offerings. Why would they launch before answering critical questions around the aptly described dystopian future above?

It is because they have arrogantly decided for everyone what is acceptable ‘urban data’ for them to collect and use. We still do not know what that is.

At the launch of Hudson Yards aka Surveillance City, this quote stood out because it implies the use of facial recognition and emotion detection software.

“We can say how many people looked at this ad, for how long. Did they seem interested, bored, were they smiling?” he said.

Pictured: Hudson Yards president Jay Cross (Credit: University of Toronto)

Silicon Valley’s technology vision for cities is that technology can make our lives better. Sidewalk Labs wants to be who we trust “to improve quality of life,” but their failure to engage the citizens they want to serve is a strategy that’s turning into a dance of a thousand cuts.

Who Owns Urban Data?

The surveillance economics taking place: Sidewalk Labs is harvesting our life events (aka ‘urban data’) through behavioral analytics. That data is an asset class that becomes recurring revenue to benefit Alphabet/Sidewalk Labs shareholders — not the citizens of the city.

Questions around ‘urban data’ every city needs to define right now:

  • What defines public urban data?
  • Do municipalities need to hand over control to private companies, and why?
  • What democratic process took place to define this?
  • What urban data is being considered personal at initiation? (Are people’s gait, face, and shape all considered fair game upon entering public space? Who defined that, and does it comply with the law?)
  • What is the imperative to collect it?
  • Who decides governance?
  • Who owns it and has access to it?
  • Who regulates it?
  • How can urban data be kept separate from online data?


The questions not being asked are even more important. Data collection points have been weaponized and the public is unaware. Sidewalk Labs’ role here should have been public utility, not government/private spy org.

Any talk of governance of data needs to account for machine learning and AI capabilities. You don’t need to save data to derive value from it.
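The point that value can be derived without retaining data can be made concrete with streaming aggregation. Below is a minimal sketch (the sensor readings are hypothetical, and this is an illustration of the idea rather than anything Sidewalk Labs is known to run) using Welford’s online algorithm: summary statistics are updated per observation, and each raw reading is discarded the moment it has been processed.

```python
# Deriving value from data without storing it: Welford's online algorithm
# maintains a running mean and variance while every raw observation is
# thrown away immediately after the update.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        """Fold one observation into the summary, then forget it."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Sample variance; undefined for fewer than two observations.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for reading in [12.0, 15.0, 11.0, 14.0]:  # hypothetical sensor readings
    stats.update(reading)                 # no reading is ever retained

print(stats.mean)      # ≈ 13.0
print(stats.variance)  # ≈ 3.33
```

The aggregate (foot traffic averages, dwell-time distributions) is the monetizable asset, which is why governance that only restricts *stored* data misses the point.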

Democratic Process + Discussion

What were they thinking, not setting these critical issues out for public discussion? Sidewalk Labs’ collect-first, ask-questions-later approach is a mirror held up to the men at the table defending the position.

What’s Next

A legal injunction needs to be put in place stat to stop the deployment and collection of urban data. This process needs to be re-started.

Done correctly, everyone could benefit. Right now Sidewalk Labs are setting themselves up for a potential fall in Canada, real hatred for their NYC kiosks, and future legal ramifications for their product.

It could have been a block party.

Related reading:

How China Turned A City Into A Prison [nyt]

A.I. Experts Question Amazon’s Facial-Recognition Technology [NYT] At least 25 prominent artificial-intelligence researchers, including experts at Google, Facebook, Microsoft and a recent winner of the prestigious Turing Award, have signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color.

Hudson Yards Questions

If you live in Manhattan you have heard about Hudson Yards opening. They have a giant marketing tool called the Vessel. People were upset to hear that they surrendered copyright when taking photos at the Vessel.

Take My Face, Leave My Photos Alone.

The irony is that no one cared about the deployed face detection (identity) and emotion detection (feelings) being harvested covertly while on their property. Honest question: does one somehow implicitly sign a TOS/EULA by walking onto Hudson Yards?

Making Billionaires Vulnerable, Rest of US Just Data Sources.

It’s being called a “playground for billionaires,” “a mall for the wealthy,” “a billionaire’s fantasy city.” What billionaire would want to live in a place that tracks when they come and go, how they feel, and video-records every step they take? I would like to meet them, if only to harden their endpoints. One hopes the complex has significant info-sec protection for all that biometric data.

What does the lease say about data ownership — your data? What happens if HY gets hacked and your biometric info is leaked as a tenant? Just a second while someone unlocks that bank account with that info and moves on the holdings in Monaco.

If Your HY Data Was Hacked: Would Protest Be Allowed at Hudson Yards?

Is public protest allowed, or is this property that does not allow for the people’s voice, only their biometric data? (Only the best of the best when it comes to your personal surveillance at Hudson Yards.)

There are many questions around private data in the public realm. Along with link blogging on the subject, I will try to raise issues here alongside potential outcomes and opportunities to ‘be best’.


Related read: Data privacy experts flag ‘smart cities of surveillance’ at ITAC Smart Cities Technology Summit [itbusiness.ca]

Link Round up / ICYMI

When a Phone App Opens Your Apartment Door, but You Just Want a Key – A lawsuit filed in October in Housing Court in Manhattan by the couple and three other tenants of the West 45th Street building demands that the landlord give them access to all the entryways without having to use a keyless entry system. But it also has opened a wider debate over privacy, ageism, and renters’ rights that has inspired new legislation in Albany.

How AI Will Rewire Us [theatlantic]

The Internet’s Endless Appetite for Death Video [NYT]
With the iPhone Sputtering, Apple Bets Its Future on TV and News [WSJ]

Ahead of two major shows, the painter Jonas Wood reflects on his early career — and the most unusual object in his studio. [nyt]

Tools:

XAI – An explainability toolbox for machine learning. Follows The Institute for Ethical AI & Machine Learning’s 8 principles. [h/t 4SL]
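For a sense of what such a toolbox automates, here is a minimal sketch of one common explainability technique, permutation feature importance. Note this is written against a hand-rolled toy model, not XAI’s actual API (which is not reproduced here): shuffle one feature’s column and measure how much the model’s accuracy drops. A large drop means the model leans on that feature.

```python
# Permutation feature importance, stdlib only.
# The "model" is a toy hand-written scorer so the example is self-contained.
import random

def model_predict(row):
    # Toy model: predicts 1 when the first feature exceeds the second.
    return 1 if row[0] > row[1] else 0

def accuracy(rows, labels):
    return sum(model_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Accuracy drop when one feature's values are shuffled across rows."""
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    column = [r[feature_idx] for r in rows]
    rng.shuffle(column)  # break the feature's relationship to the labels
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature_idx] = v
    return baseline - accuracy(permuted, labels)

# Dataset the toy model classifies perfectly; feature 2 is pure noise.
rng = random.Random(42)
rows = [(rng.random(), rng.random(), rng.random()) for _ in range(200)]
labels = [1 if a > b else 0 for a, b, _ in rows]

print(permutation_importance(rows, labels, 0))  # large drop: model relies on it
print(permutation_importance(rows, labels, 2))  # 0.0: unused noise feature
```

The same idea scales to real models; toolkits like XAI (or scikit-learn’s model inspection module) wrap it with repeated shuffles and visualizations.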

Google Analytics Opt-out Browser Add-on [Google]
Google Hotels / Travel [Google]

PublicRadioFan.com features schedule listings for thousands of public radio stations and programs around the world

The Abstracted Absent Arbiter or The New Other

MEDIAEATER DIGEST TUE 25th, DEC, MMXVIII MERRY CHRISTMAS
 DISPUTE THE TEXT, SAPERE AUDE


The absent arbiter and provenance obscurer are new mechanisms for control.

We are at the intersection of several concepts that have not fallen into place at such scale before. They form the steps towards a dystopian future, a place where provenance, disintermediation, anonymity, arbitration, and AI all intersect.

When connected they raise the need …

  • For provenance across all digital assets. One of the greatest problems of our time.
  • To bring clarity to the abstraction and disintermediation of authority in critical decision making processes that impact our lives.
  • To implement AIs with goals clearly stated and provenance established.

This is important because the absent arbiter and provenance obscurer make fake news possible; they make it possible for accountability to be obscured.

We need to address provenance not for business cases like tracking IP for profit, or DRM for protection, but because, more importantly, knowing the source is part of understanding truth, and that should supersede capitalistic or technological advancement needs. Root always matters.

Visible examples of this abstraction: when you are unable to speak with a human on a help line, or when decision-making AI in a black box abstracts us from the logic and the decision tree that changes your life. These mechanisms of disintermediation are the ‘anonymity authority’ or ‘absent arbiters’.

Not coincidentally these are all intersections of AI optimization. Abstraction of authority in critical decision making is the mechanism that allows for the casualties of AI’s implementation.

Perhaps even more critical, there is also the abstraction and appropriation of our cultural heritage. The corporate copyright claim against putting ‘Hakuna Matata’ on a t-shirt (like trademarking ‘good morning’ for use on a tee-shirt). “It’s a common phrase we use every other day. No company can own it.” Your history and language, “not yours,” as the auctioneer says. The provenance stolen, then claimed. Suddenly saying good morning in your native language on a t-shirt is a crime.

Numerous examples surface every day in our lives, located at the intersection of provenance and truth, in the area of anonymity. It’s at that intersection that we need to be most transparent, as these things ‘move men to action’ and change lives. The current administration’s abstraction of our northern border’s legal immigration process is another current example.

Without the ability to know the goal, motive, and ownership of the new master abstractor, we all end up enslaved to prescribed fiction vs. news and further disintermediated from the things that move us forward, be it basics like food, shelter, and clothing (two of the three are now services) or knowledge and wealth creation for prosperity.

Synthetic Manipulated Media Pt III

Ripe receptors in manipulated media swarms, architected to move men to action. 

This is the third in a series of posts on the media turn from human-generated media to media that is algorithmically generated, specifically addressing synthetic manipulated media.

Points discussed thus far:

  • Is ‘synthetic media’ a proper description for this type of media?
  • A round-up of existing papers and discussions taking place.
  • A posit that the immediate-term potential is disruptive, even destructive.

Why this is important

Video has the power to be a form of control and a life-changing force. Media that has deception as a core capability takes on increased importance.

In this post I would like to take a look at adjunct technology and how it might fit with synthetic manipulated media, following up with a post that addresses potential course corrections that might improve outcomes.

This algorithmically generated media that can deceive the viewer impacts both the encoding and decoding of images. Visual interpretation is one of the most primary human functions.

As the saying goes, if our life were a 24-hour clock, humans learned to read at 11:59 PM. We are visual beings.

This is not even wolf-in-sheep’s-clothing territory; this is a wolf dressed as a dragon. (One of the reasons we should name it correctly.)

We see more colours of green because it helped protect us from being eaten as early humans. Our visual acuity is a mechanism for survival. If these systems of visual trust are disrupted, the implications will be significant.

Today’s digital landscape

The concern is this: synthetic manipulated media comes at a time when several other capabilities and behaviors are all hitting critical mass, and the alignment of all of these in the wrong hands is dangerous.

Every day we see low-grade, high-impact propaganda or information warfare campaigns. Couple this with synthetic media and current digital swarm behavior, driven by a divided tribal society and individual psychological targeting. This formula sets up a perfect storm for infection.

Some of these digital environmental and human conditions include:

  • Cultural tribalism
  • Psychographic targeting
  • Media literacy/human receptivity
  • Synthetic manipulated media
  • Social networks
  • Swarm behaviors
  • ML/AI

‘Cat and mouse’ is the phrase used to talk around the issue that both law and tools are playing catch-up, true to form, falling behind significant technology developments. We do not have the time for the solve to play out, as the immediate-term impact will be foundational.

Swarm behavior

I have spent a good bit of ink on ‘swarm behavior’ in digital spaces. This is the ingredient that acts as both an accelerator and a potential incubator, pushing false information to mainstream media channels. It happens fast and in places that are not immediately seen. Often, by the time a swarm hits critical mass it’s too late; everyone who participated in it becomes infected in some manner.

There are analogs to look to: the fairness doctrine in TV, or the fact that you can’t just slap a hundred-dollar bill down on a copier. These are potential frameworks or laws that can moderate technology, preventing the disruptive from becoming destructive. That’s the line in the sand we all seem to ignore until technology eats part of us in its evolution.

As someone who loves design, the above diagram is a visual horror show; it should only serve to illustrate a potential threat scenario, not wow with visual brilliance. Please, someone, re-design this.

Is ‘synthetic media’ the right name for media that is made to manipulate?

I spent last week digging into what is being called ‘synthetic media’. I attended (the always amazing) NYC Media Lab 18, which had a panel, ‘Great Synthetic Media Debate’, that discussed the dynamics of this type of media, and a talk, ‘Deepfakes and Other AI-Generated Synthetic Media: What Should We Fear? What Can We Do?’, by Sam Gregory from WITNESS, which was significant in scope and implications.

‘Synthetic media’ is being defined as “algorithmically created or modified media”.

This subject has been in the news because of deepfakes.

Deepfake, a portmanteau of “deep learning” and “fake”, is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos.

The name ‘deepfake’ implies something not real; approaching it with discernment is implied in its name. This is good. The term ‘synthetic media’ fails at conveying meaning and intent, and does not serve to warn the receptor.

The word ‘synthetic’ implies something that is man-made; to be exact:

(of a substance) made by chemical synthesis, especially to imitate a natural product.

This is something made from computational algorithms. That might be splitting hairs for some, but where we really fall short is divorcing it from intent. This is not only man-made, it’s made to manipulate.

A media whose primary focus is to deceive the receiver, be it in art or politics.

This post is not about the ethical implications of this medium. I want to stay focused on the foundational lexicon while we still can.

‘Manipulated media’ more clearly states the goal of the AI.

A media that is disembodied from context (intent) and provenance. Context and provenance are critical interpretation tools. We should name it for what it is, since what it does negates the basic rule of seeing is believing.

It is important to use an accurate descriptor, it is dangerous not to.

As we weigh the pluses and minuses of this type of media, the early tl;dr is: we are doing ourselves a disservice by not correctly naming it for what it is, be it ‘manipulated’ or some other term that more clearly states provenance, context, and intent.

(this is the first in a series on this subject: ….’Ripe receptors in manipulated media swarms, architected to move men to action.’)