The UX of AI (design.google)
193 points by dsr12 on Jan 26, 2018 | 62 comments


It looks like it's just a way of reassuring the "common people" that all AI systems are good, under human control, and helpful. Google's AI is often centered on surveillance, profiling, marketing, and exploitation. We are being spammed with pictures and videos of happy people playing with children and animals so that we learn to associate AI, and Google's AI in particular, with happiness and innocence, while it is simply a very powerful and unimaginably advanced tool of corporate oppression.

I found a lot of factually incorrect statements in that article. Such as: "If a human can’t perform the task, then neither can an AI." Don't we create AI systems all the time to solve problems humans can't?


Yeah, this whole article seems... off.

> We are being spammed with pictures and videos of happy people playing with children and animals so that we learn to associate AI, and Google's AI in particular, with happiness and innocence, while it is simply a very powerful and unimaginably advanced tool of corporate oppression.

I might be in a minority here, but I found those cute pictures off-putting. Like "too much sugar", "fake happiness" off-putting. Hell, some pictures/videos reminded me of Black Mirror. The only association I made here was that Google seems to focus on trivialities, on surface-level faux-happiness, instead of using AI research to empower people.

Beyond that, something bugged me about the whole narrative of "human-centric design", "we're just dumb developers, and this is a social issue". Excessive humility signaling, or am I just imagining things?

To top it off, this highlighted sentence caught my special attention:

"The hardware, the intelligence, and the content ultimately belong to you and you alone."

It's funny to see this, because it's literally anathema to Google, and most of the businesses on the Internet. If that sentence was true, it would mean the product works offline, is fully self-contained and fully owned/controlled by the user. It ain't happening, and we all know it.


It's about creating an illusion that you can control it. Recently a lot of people have been invoking Terminator-esque visions of AI slipping out of control and becoming the worst threat to humanity's survival. Obviously if this were to become the prevailing image of AI in our culture, companies that work on AI would seem outright evil. That's why they're dishing out propaganda (because that's what this article is) to change the zeitgeist.


> I found a lot of factually incorrect statements in that article. Such as: "If a human can’t perform the task, then neither can an AI." Don't we create AI systems all the time to solve problems humans can't?

Not really, I think we create AI systems to solve problems humans take way too long to solve. But I can't think of anything AI does that a human couldn't, given sufficient time and resources.

Regarding the rest, I fully agree with you. Things like this are nothing but scary to me.


Ah, but certain problems are not just limited by the total amount of time and resources that may be apportioned to them; they must be completed within specific time and resource constraints.

And that's a spot where computers and programming show that they are capable of implementing systems that biological humans literally cannot under any circumstance. Our hardware/meat-ware is not capable of it.

If we maintain a willing suspension of disbelief, sure, a human being might be able to add up a million numbers correctly given infinite time and infinite resources (though in reality they would never be provisioned with either), but what no individual human can do is be presented with those numbers and add them up in under a second. Or, to be more realistic and to move away from the simple notion of individual problems and individual resource constraints, there are fields of knowledge built on coordinated compound tasks that require doing 20 disparate but coordinated operations in parallel, locally, in under a second.
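
Just to put a rough number on the "million numbers in under a second" point, a throwaway Python snippet (timings obviously vary by machine):

    import random, time

    nums = [random.random() for _ in range(1_000_000)]  # a million numbers
    start = time.perf_counter()
    total = sum(nums)
    elapsed = time.perf_counter() - start
    print(total, elapsed)  # typically a few milliseconds, not a human lifetime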

The application of our modern day technology literally involves the creation and implementation of systems that no biological human or human-centric system can accomplish.

edit: Of course, that might just be another way of saying these are the problems you said human beings take way too long to solve, but I think there's an important need to distinguish between problems we can't do because of algorithmic complexity, and problems we can't do because we are not made/manufactured of the same stuff and not programmable with the same ease. Without blurring the definition of what it means to be human/computer, there are fundamentally problems that a computer/AI can solve that we cannot.


I feel like the statement is fundamentally contradictory, seeing as how humans have invented and trained the AI systems. By definition, that is something that humans are doing.

On the other hand, it may be more insightful to say that if humans cannot perform the task (time is not a factor), then neither can AI. For example, assessing non-tangibles, or something abstract like identifying sarcasm in text without context.


I think this is dancing around a tautology - if humans can't do something "in theory", it means they can't describe it, which means they can't program it in AI. Obviously.


ML can handle things at scale (not just speed, number of data points) that humans can't. Imagine a classification problem with 1000 dimensions -- humans can't keep that many entries in working memory. Granted, many of these are trivially solvable if you can scale them (e.g. human with a computer but no AI/ML), but scale is an important component.
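
To make the scale point concrete, here's a toy sketch (random data, nothing from any real system) where an ordinary linear model happily fits 1000 dimensions that no human could hold in working memory:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 1000))   # 5000 examples, 1000 features each
    w = rng.normal(size=1000)           # a hidden rule no human could eyeball
    y = (X @ w > 0).astype(int)

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.score(X, y))              # near-perfect, across all 1000 dimensions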

It's kind of like how metadata isn't that big of a deal until you have enough of it.


Given sufficient time and resources, a human will likely recruit other humans to design and build AI to solve those problems.


I'd guess the correct form of that statement should be:

"If a human can't theoretically perform the task with a lot of time and a pocket calculator, then neither can an AI."

But it's not really saying much as that applies to almost everything.

As for the core idea of designing AI UX better to get people to trust it - maybe put a list of all the data being collected and stored up front, and make it possible for the user to easily and permanently delete specific data while still allowing them to use the system. Also an explanation of actions taken: "the system took this photograph because 76% of testers found photographs with these particular features (high light level, human looking at camera, action) to be desired." You could probably trace your dataset back to these features by collecting all the photos from training that matched to those features and manually labeling them.
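
As a rough sketch of what such an explanation trace could look like (the feature names, labels and numbers here are entirely made up for illustration):

    def explain_capture(photo_features, training_set):
        # which human-readable features fired on this shot
        fired = [f for f, v in photo_features.items() if v]
        # pull the labeled training photos that share all of those features
        matches = [p for p in training_set if all(p["features"].get(f) for f in fired)]
        liked = sum(p["label"] == "desired" for p in matches)
        pct = 100 * liked / len(matches) if matches else 0
        return "Captured because %.0f%% of labeled training photos with %s were marked as desired." % (pct, ", ".join(fired))

    training = [
        {"features": {"high_light": True, "looking_at_camera": True}, "label": "desired"},
        {"features": {"high_light": True, "looking_at_camera": True}, "label": "not_desired"},
        {"features": {"high_light": True}, "label": "desired"},
    ]
    print(explain_capture({"high_light": True, "looking_at_camera": True}, training))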

In short: un-black-boxify your UX.


> un-black-boxify your UX

I thought the whole point of AI was that the program itself is effectively a black box, even to the programmers.

They ask a neural network to solve a problem; it tries repeatedly until it succeeds. The algorithm it arrives at is not necessarily sufficiently simple that a human could understand it, even given the source code.

The problem isn't just that Google won't tell the user how their software works; that's nothing new. The scary thing is that even Google doesn't understand how their software works; they just know that it does seem to work most of the time.

Hopefully I've misunderstood completely, and we're not entering an age of “the algorithm (that we don't fully understand but is extremely useful most of the time) says so, and we shouldn't question its authority”.


> I thought the whole point of AI was that the program itself is effectively a black box, even to the programmers.

No, it isn't. In fact, it's the opposite of a responsible approach to AI, and one of the reasons why you have smart people screaming about the dangers of an uncontrolled intelligence explosion.

Being a black box is a feature of a particular method people now choose to do AI with - namely, neural networks. Neural networks are basically chains of multiplications, additions and thresholding; you throw tons of data at them and let them tweak their coefficients until the result sorta looks like what you want. The knowledge hidden in those coefficients is almost completely opaque to us ("an open research problem").
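
For anyone who hasn't seen it spelled out, the whole trick fits in a few lines - a toy XOR network, just to show where the "knowledge" ends up living:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1)           # multiply, add, threshold
        out = sigmoid(h @ W2)         # ...and again
        err = out - y
        d2 = err * out * (1 - out)    # tweak the coefficients toward less error
        d1 = (d2 @ W2.T) * h * (1 - h)
        W2 -= 0.5 * (h.T @ d2)
        W1 -= 0.5 * (X.T @ d1)

    print(out.round(3))  # typically ends up near [0, 1, 1, 0]; the "why" is buried in W1 and W2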

People use deep learning because it's effective, but I hope that we go back to methods we're able to trace and analyze before we attempt to bootstrap a general AI.


I think you have it backwards there. Computers can already perform all sorts of calculations that a human can't. AI aims to make computers solve problems that currently only humans can.


You are spot on, but I doubt that was the plan.

I felt it was that they gave a toy to someone they wanted to keep. That person just started training a photo neural network from scratch when Google already has dozens of more mature such projects. His product will run into the same mistakes as the more mature projects (see: the gorilla incident), but said person had to write grandiose progress reports to justify their toys, and PR loved this one, probably for the reason you pointed out: that naivete that brings such calm and reassurance.


So you claim Google's AI is centered on surveillance, exploitation and corporate oppression? Do you have anything to support this?


Is this a joke?


No, I am actually serious. Could you give specific examples of Google's AI efforts for surveillance, exploitation and corporate oppression?

I can think of many examples of a more benign nature, e.g. strong Go and chess playing AI, lots of algorithms in Google Photos, song recognition, infrastructure optimization, spam detection, search improvements and ranking.

Or maybe you consider AI algorithms that increase ad targeting precision oppressive or exploitative?


Tracking everything you do on the internet and offline using your phone, including 24/7 location tracking and passive Wi-Fi mapping, is not intrusive enough for you?


Do you think the designers of such services made them mainly for nefarious purposes (surveillance, exploitation and corporate oppression)?

Most of the things you mention have perfectly legitimate and useful functionality; they are usually optional, data is anonymized after a while, and there are options to completely disable, remove, or see all data with a few clicks.


Uh, you've missed the forest for the trees.

Google is an ad company, full stop.

Everything Google does is functionally built to track you and sell ads based on that tracking. Any "legitimate and useful functionality" exists to give you a reason to feed data into their ad company. Google Maps doesn't exist to help you get around; that has never been its purpose and never will be. Google Maps exists to give you a reason to tell Google where you are and where you're going so they can target ads to you.


Well, you can disable almost all of that and delete all your history permanently. Isn't that nice?


You can delete it from your view, but you can never delete it from Google's servers. And they (like Facebook and others) have an advertising id tied to your profile even if you don't have an account with them. Try deleting that.


No, when you delete your data (history etc.) it is also deleted from servers and backups. This was previously discussed on HN before, with some input from google SREs.


And you know this how? They can just remove it from your view, claim they deleted it, and keep it on their servers for future profiling, just like they do with people who do not have accounts with them.


Well, it's Google engineers' word against random speculation; why else would they bother creating all that infrastructure for deleting data thoroughly? I'll take theirs.


> see all data

Yes? Where do I have to click to get a database dump of the actual data Google has on me? In machine-readable format, please?


The My Activity page is a good start.


Regarding the interaction of AI and user experience, I like to use the spell checker as example.

Think what it means that a black-box process looks over every one of your actions, sometimes deciding to "correct" some of them, more or less silently. Now think what it would be like if you couldn't see the list of alternatives that the AI is considering, and if you couldn't fix the mistakes made by the corrector.

Until we get person-like "hard" Artificial Intelligence, the desired model for AI is having a way to inspect all the possible alternate decisions that could be taken, and most importantly, having the possibility for a human to override any one of those automated decisions (to the point of safely disabling the whole system in extreme cases).
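
In code terms, all I'm asking for is an interface shaped roughly like this (names are hypothetical, a sketch rather than any real API):

    class Corrector:
        def __init__(self):
            self.enabled = True
            self.overrides = {}              # user decisions always win

        def correct(self, word, candidates):
            # candidates: list of (replacement, confidence) the model considered
            if not self.enabled or word in self.overrides:
                return self.overrides.get(word, word), candidates
            best, _ = max(candidates, key=lambda c: c[1])
            return best, candidates          # chosen word plus every alternative, inspectable

        def override(self, word, keep_as):
            self.overrides[word] = keep_as   # and the override sticks

    c = Corrector()
    print(c.correct("teh", [("the", 0.9), ("ten", 0.05)]))   # ('the', ...)
    c.override("teh", "teh")                                  # "no, I meant that"
    print(c.correct("teh", [("the", 0.9), ("ten", 0.05)]))   # ('teh', ...)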


Oh yes.

Reminds me of a recent observation by Scott Alexander:

Dear @apple - your OS has a global spellcheck that autocorrects names of medications to names of different medications, eg "duloxetine" to "fluoxetine", without telling the user. Some clinics inexplicably continue to use MacBooks. Please fix this before someone gets hurt.

(https://twitter.com/slatestarcodex/status/944739157988974592)

There's nothing more infuriating than a system that "knows better" and which you can't override (and have the override stick). Today's spellcheckers generally break whenever what you're writing isn't in a single language, using common words. Which, for me, means pretty much all the time - since most of the chats and documents I write involve mixed Polish/English with domain-specific vocabulary. And even if we had General-AI-powered spellcheckers, there would still be a need to tell them no - after all, my spelling might be bad, but I might be making an explicit stylistic choice.

AI systems need not remove the user's agency or control from the process.


macOS is a service provided by Apple, not a tool that you control.

AI could be a very useful tool, but it's a very scary service.


Which is why I hate the very concept of Software as a Service.


Incidentally, we wouldn't even tolerate this kind of behavior with humans (or expect it to work).

Imagine if you wrote a book and the publisher assigned an editor before publication - an editor who knows nothing about the specifics of your book, who you're not allowed to talk with, who can make arbitrary changes to your book and whose decisions will be final.

You'd rightfully be outraged at that situation. Yet if the editor is an even less understanding computer program it seems to be generally accepted.


I liked the article. I have worked in the field of AI since the 1980s and this article gave me a different perspective.

Of course the elephant in the room is privacy concerns. It concerns me that Facebook and Google track our actions in ways that we can’t really opt out of. Facebook is worse.

As a decades-long supporter of the FSF, ACLU, and EFF, my attitude has changed. I still occasionally donate to these organizations because they push back for our benefit, but I live and work in a digital world and I want to be as effective as possible in that world.

Google Assistant helps me get stuff done in ways that Apple’s privacy-respecting Siri can’t because of the information Google has. I would love to own the camera showcased in this article and, at family and other social events, get a few good pictures at the end of the day without having to think about taking pictures.

People get to make their own choices, and for myself I mostly use DuckDuckGo and a VPN so Verizon does not have my web history, but I compromise on using select Google services.


Are Google still trying to brainwash everyone into trusting them? I find it funny, since they are the ones who, if they could, would track our every movement and then try to sell us something.

I can understand that, while ML/AI systems are still fairly closed off and not understood by the general population, having someone with UX skills to better design the interactions could help people trust AI/ML products (preferably ones not developed by Google).


I use Google Photos to back up all my cell phone camera photos.

It automatically generates galleries from trips I make, and selects the "best" photos.

I still review them, and decide if I agree with Google's ML algorithm's opinion of "best" (from a series of snaps of someone's face or a landscape).

While it's really good at removing blurry shots, aligning crooked ones, and discarding those where someone half-blinked, it cannot AT ALL tell the difference between a weird interesting endearing facial expression and a weird strange off-putting facial expression.

And that's because the real data is not in the image. It's in the personality that I know of the person. A picture of a person smiling who I know is dour or unhappy is the one I want. Meanwhile for a person who's incredibly photogenic and always has a flawless smile - but is usually consistent and bordering on insincere - I'd prefer a candid shot where they're not looking their best - but the real personality that maybe only I know of them is shining through.

These are not objective truths. My favourite photos of my wife are ones she hates. And it's not even about having a different SELF-image from those that others have of you - we also like different photos of our dog, and of our mutual friends.

No ML model will be able to pick that, and unfortunately this isn't just a case of "let's give them more data and wait".

If we trust the algorithms to make these decisions for us, we will just encourage the rounding of all the uneven edges of our world until we settle on a bland average existence where everyone has the same idea of what a "cute baby" looks like.

This is already happening with information bubbles on Facebook and Twitter.

The trend needs to be reversed, not doubled-down on.


> No ML model will be able to pick that

Everything you described actually sounds like a pretty straightforward challenge for ML; do online learning for preferred subject/expression pairs on a per-user basis.
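
Even something as simple as this per-user sketch would capture a chunk of it (the feature keys and the update rule are made up for illustration):

    from collections import defaultdict

    class UserPreferenceModel:
        def __init__(self, lr=0.1):
            self.weights = defaultdict(float)       # one weight per (subject, expression)
            self.lr = lr

        def score(self, subject, expression):
            return self.weights[(subject, expression)]

        def feedback(self, subject, expression, liked):
            # online update: nudge the weight toward this user's verdict
            target = 1.0 if liked else -1.0
            key = (subject, expression)
            self.weights[key] += self.lr * (target - self.weights[key])

    model = UserPreferenceModel()
    model.feedback("Katie", "funny_face", liked=True)
    print(model.score("Katie", "funny_face"))       # rises as the user keeps agreeing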


Nope. If Katie once made a funny face because some situation on their trip to Paris was hilarious, that doesn't mean I always/only want pictures of her where she makes that same funny face for the rest of my life.


Calling YouTube video recommender... calling Amazon... please listen up. I do not want to listen to that song I just heard, or buy that thing I just bought, dammit. The pattern recognition engines are getting clearly more intelligent, but what they then DO with that inference continues to be mostly useless. (From a user point of view.)


This doesn’t sound like an AI problem; it sounds like a context problem that no one (not even a human) besides yourself could resolve with the available data.


Right. And we don't need or want them to.

I want the smartest software engineers in the world to work on other problems than how to recommend better photos for me.


I'm actually kind of impressed by this bit:

> What if we could build a product that helped us be more in-the-moment with the people we care about? What if we could actually be in the photos, instead of always behind the camera? What if we could go back in time and take the photographs we would have taken, without having had to stop, take out a phone, swipe open the camera, compose the shot, and disrupt the moment? And, what if we could have a photographer by our side to capture more of those authentic and genuine moments of life, such as my child’s real smile? Those moments which often feel impossible to capture even if one is always behind the camera. That’s what we set out to build.

They are really taking the growing worries around ubiquitous smartphone usage and trying to spin them into an argument for having your whole life constantly recorded by a camera.

That's some next-level PR skills right there.


I'm realising that The Circle was a documentary, not a parody.


That was a great book (movie was good also). It will be “interesting” to see how the world will change over the coming decades, good and bad effects of technology. I am in my mid 60s. From my point of view, the acceleration of technological improvements is geometric and life feels different as each new wave of technology changes our lives. I started using computers by entering programs in octal after hand assembly and now I occasionally run jobs using thousands of CPU cores.


Google's PR department is hands-down incredible. It accomplishes Steve Jobs level marketing miracles on a daily basis. The difference between Google and Comcast is effectively the quality of their PR department, which should help quantify how effing amazing their PR department is.


So they are basically creating that creepy life-recording product from S01E03 of Black Mirror?


No, they're planting a creepy artificial intelligence on top of Black Mirror's life recording product.


> If a human can’t perform the task, then neither can an AI.

False. I am applying machine learning to network security. The data are basically incomprehensible to a human but AI is quite able to find patterns in those huge amounts of data.


Indeed.

The human body has limitations. Maybe imagination is the exception, since it's considered limitless.

But consider the next use case: no human can sustain ~10G for more than a few seconds [0] while piloting anything that flies. A sufficiently capable flying AI has no issue at any G.

[0] https://en.wikipedia.org/wiki/G-force#Human_tolerance


Yeah, that statement sounds so arrogant coming from a human...


We are still in the nascent stages of AI UX. I’m excited to see where it goes.


"If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem."

I'm guessing a desire to avoid advertising or a certain corporation's tracking and holding data on me and my family isn't going to be deemed a sufficient human need :P

Additionally, like Facebook, I have no choice because even if I opt out (which I can't do without already opting in), someone else out there taking video and photos and including me opts me in regardless of my feelings.

And yet, apparently cameras and video analysis with the convenient side effect of tracking and identifying me and my loved ones to produce even more Facebook-wall worthy images is a human need.

A cynic (realist) might observe these principles aren't enough to stop one from building a very powerful system for a non-existent problem that not only doesn't meet a real human need, but might actually be, on balance, detrimental :(


> If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem.

If you have to "align with a human need" and that problem is not obvious, this sounds like a solution looking for a problem.


> If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small — or perhaps nonexistent — problem.

This is a common theme in AI research from the major software companies, who are throwing thousands of developers and billions of dollars at this. AI isn't a giant monolithic application like a search engine. So long as the vision of AI remains centered on single, closed, data-oriented, monolithic applications, AI will continue to be only as impressive as it is now. It will get faster as the hardware gets faster, but it won't be what tech evangelists are hyping it up to be.


What if the people I'm familiar with are the entire population of the country?


Well then the AI will help you capture every moment you care about.


To be honest, whilst reading this article I was more "shocked" by this 'clip' and it kind of took away from the information they were trying to convey.

This looks creepy to me, and I probably would not trust the device enough to use it for important moments. And for the not-so-important moments I really don't need pictures or videos.

I realise the point of this article was not really this specific product, but damn, it did take my focus away from the content.


Quite surprised at the use of gifs on this page, just the images weigh in at 92MB.


Besides the PR, even the purported goal is less than worthless. Clearly what photography needs is more banal snapshooting, now with editorializing AI enforcing utter conformity.


Re: the kid with the basketball. How is this the best photo? The AI chose a picture centered on the basketball, but which trimmed off the top of the child's head.


I'm extremely cynical and distrustful of AI-first. I don't accept the three "truths" FTA:

>>>>>

We developed the following truths as anchors for why it’s so important to take a human-centered approach to building products and systems powered by ML:

1.) Machine learning won’t figure out what problems to solve. If you aren’t aligned with a human need, you’re just going to build a very powerful system to address a very small—or perhaps nonexistent—problem.

2.) If the goals of an AI system are opaque, and the user’s understanding of their role in calibrating that system are unclear, they will develop a mental model that suits their folk theories about AI, and their trust will be affected.

3.) In order to thrive, machine learning must become multi-disciplinary. It's as much–if not more so—a social systems challenge as it's a technical one. Machine learning is the science of making predictions based on patterns and relationships that've been automatically discovered in data. The job of an ML model is to figure out just how wrong it can be about the importance of those patterns in order to be as right as possible as often as possible. But it doesn't perorm [sic] this task alone. Every facet of ML is fueled and mediated by human judgement; from the idea to develop a model in the first place, to the sources of data chosen to train from, to the sample data itself and the methods and labels used to describe it, all the way to the success criteria for the aforementioned wrongness and rightness. Suffice to say, the UX axiom “you are not the user” is more important than ever.

<<<<<

My take:

1. I like the "human-centric" AI goal, but in the end we're inevitably going to move slow, faulty humans out of the loop. No matter how "thoughtful" we are along the way, the inevitable result is that we always want to take humans out of the loop to save money and avoid hard work and instead automate it away, because of sheer economics. It's a mistake to think AI is not going to figure out what problems to solve, or that we are not going to let AI make its own goals (it already makes its own subgoals at multiple levels). We are continually incapable of figuring out what problems to work on. While the planet warms up, we waste energy on cryptocurrency bubbles. While the ocean fills up with trash we make chat apps and self-driving cars and better smartphones. These decisions are dumb. We'll eventually realize that we suck at focusing on the right problems and put AI into the decision making processes. Worse, we'll want to (stupidly, IMO) pursue artificial general intelligence to do that. Each step we get closer we'll just hope for more and better outcomes from the AIs, and cede more and more decisions to the AIs. And then one day we'll find it's terrifying to have an inscrutable AI develop its own goals at a superhuman pace.

2. AI's goals are always opaque. Especially when that AI system is run by a massive opaque for-profit corporation. We all know what such corporation's ultimate goals are: make as much money for shareholders as possible. It's great that we tell ourselves stories about how awesome AI is for people, but make no mistake, AI is just the latest weapon in the ever-escalating pursuit of perpetual exponential growth. Who the hell knows what its goals will be in the end.

3. This is an unparseable smokescreen going in six directions at once. Translation: we have no idea what we are doing or why, or whether it is going to work. But we are committed to it, and we're full of a blind faith in technology. Maybe if we had more sociologists it would work?


More ads from the ad company. yawn


Is the Clips still for sale?



