Carol Smith’s talk was also about Artificial Intelligence. If Seth Earley’s keynote justified why Information Architects need to be involved in Artificial Intelligence, Carol’s talk was a primer on how to start.
Now for IA in the Age of AI: Embracing Abstraction and Change with @carologic #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Technology is imperfect. It’s made by humans, who are not perfect. We’re messy #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
IAs need to work to make sure that users are at the center of whatever we’re doing. We need to work to mature organizations’ approaches to problems #ias
— Anne Gibson (@perpendicularme) March 23, 2018
Make sure that AI systems have what they need. Like a garden. Water regularly with info. Thin poor performing models. Prune functionality. Cull broken and biased models. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
AI is when a machine is exhibiting intelligence – taking info from the environment, incorporating it in the situation, making decisions to maximize chances of success #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
AIs only know what we teach them, they only know a narrow area, and they may not know nuances, but it’s all based on algorithms. AIs are actually kind of stupid #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Who are the users of a lawn care company? What info do we have? Can we create some ground truths and begin to teach an AI? This is information architecture. That requires a deep knowledge of the information. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
This is not how humans learn. Human babies learn through their senses and get it from everything. We gain information from a lot of sources. Computers only learn what you give them, and it’s mostly text or image based. Can’t learn the way humans do. Very narrow #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
3 basic areas of focus:
Data and Ground Truth
Artificial Intelligence
User Experience #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Gather user needs. Review the results. Replace models that don’t work #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Note that if you only get data from a few people, you may get unexpected biases, like “the lawn care folks who take notes also happen to be ones that are heavy chemical users” #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
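The sampling-bias point above can be illustrated with a toy simulation. Everything here is a made-up example, not data from the talk: suppose the only customers whose behavior you record are the note-takers, and note-takers happen to be heavy chemical users.

```python
# Hypothetical illustration of sampling bias: if training data comes only
# from the customers who take notes, and note-takers skew toward heavy
# chemical users, the collected data badly overestimates typical use.
import random

random.seed(42)

# Simulated lawn-care customers: (takes_notes, chemical_use_level on a 0-10 scale)
population = (
    [(True, random.uniform(6, 10)) for _ in range(200)]    # note-takers: heavy users
    + [(False, random.uniform(0, 4)) for _ in range(800)]  # everyone else: light users
)

def mean(xs):
    return sum(xs) / len(xs)

overall = mean([use for _, use in population])               # the real picture
sampled = mean([use for notes, use in population if notes])  # what we actually collected

print(f"true average chemical use: {overall:.1f}")
print(f"average in collected data: {sampled:.1f}")  # skewed high
```

An AI trained only on the `sampled` slice would learn that heavy chemical use is the norm, which is exactly the unexpected bias the tweet warns about.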
AI is only as good as the data put in and the time spent improving them. (This is not “set it and forget it”) #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Western Educated Industrialized Rich and Democratic (WEIRD) people are the ones that most AIs and studies are done on. That’s a big set of biases. There are a lot of outliers that are “the rest of the world” #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Data is written by the victors – and their biases. When we say “lawn care specialist” do you picture a woman? Probably not. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
All data is biased. Data created/charged by humans inevitably carry forward their biases. @carologic #IAS18
— IAC – information architecture conference #IAC24 (@theiaconf) March 23, 2018
Who is working on the collection? Is it diverse? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Who will use the system and why? What are their goals? What are they trying to solve? Is it a problem a computer can solve? Is the AI the right solution? What is out of scope? Are they working together? How can they collaborate? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
What are the potential unintended consequences? What are their fears? How can you address those fears? Prepare for fears to protect users #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
The AI management will need to match the org ecosystem – microcosm of the org. Someone needs to gather data, curate content, watch for issues. How will the system that exists support this new tech? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
AIs take a lot of work. You can’t just plug in the data. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
You must model relationships to teach terms #ias18
— Michele Marques (@ms_marques) March 23, 2018
The work you’re doing is creating relationships between information. “This means that, this is a child of that”. Yes, they really are that dumb. You can get models from others but you’ll probably have to adapt them, for language, etc. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
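The “this means that, this is a child of that” work is literally writing down explicit relationships between terms. A minimal sketch, with hypothetical lawn-care terms of my own (not from the talk):

```python
# A minimal sketch of explicit relationship modelling: synonym links
# ("this means that") and child->parent links ("this is a child of that").
broader = {
    "riding mower": "mower",
    "push mower": "mower",
    "mower": "lawn equipment",
    "string trimmer": "lawn equipment",
}
synonyms = {"weed whacker": "string trimmer"}  # "this means that"

def ancestors(term):
    """Resolve synonyms, then walk child->parent links to every broader term."""
    term = synonyms.get(term, term)
    chain = []
    while term in broader:
        term = broader[term]
        chain.append(term)
    return chain

print(ancestors("weed whacker"))   # ['lawn equipment']
print(ancestors("riding mower"))   # ['mower', 'lawn equipment']
```

Until relationships like these are modelled, the system has no idea that a weed whacker and a string trimmer are the same thing – which is the sense in which “they really are that dumb.”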
Understand the problem deeply and make sure we’re building the right AI. If you pick the wrong AI system you will not be able to solve the problem. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Even if you’re reading huge amounts of data the AI can still be wrong #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
The systems to teach self-driving cars are much more complex than the systems to teach a computer to play Go. Image pattern detection is even harder than that. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
How trusting are the users of AI? @carologic works for Uber, she knows that people don’t trust self-driving cars. How do you engender trust? Add “fun” to show that it’s working #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
What is going to change when the AI is introduced? What value does it bring? What will improve? Will the system do some things better or faster? What is not going to be improved? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Indicators of ethics:
Privacy
Accuracy
Property
Authority #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Keep people and data safe. When unintended consequences arise, what are you going to do? Make contingency plans. Look for warning signs and help users recognize warning signs. What is the worst potential action? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Make sure we can unplug the machines. We need to make sure there are brakes and back doors and ways to shut it down. A system without a back door to shut it down is irresponsible. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Accuracy – we need LOTS AND LOTS of data. It needs to be high-quality. It needs to be harmonized. We need to align terms for inconsistent fields. Misspellings will happen, how will you handle them? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
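Harmonizing inconsistent fields and handling misspellings can start as simply as mapping raw values onto a controlled vocabulary. A sketch using Python’s standard-library `difflib`; the vocabulary and cutoff are illustrative assumptions:

```python
# A sketch of the harmonization step: map inconsistent, possibly misspelled
# field values onto a controlled vocabulary using fuzzy matching.
from difflib import get_close_matches

CANONICAL = ["fertilizer", "herbicide", "pesticide", "mulch"]

def harmonize(raw, cutoff=0.75):
    """Return the closest canonical term, or None if nothing is close enough."""
    match = get_close_matches(raw.strip().lower(), CANONICAL, n=1, cutoff=cutoff)
    return match[0] if match else None

print(harmonize("Fertilzer"))   # misspelling -> "fertilizer"
print(harmonize("herbacide"))   # -> "herbicide"
print(harmonize("unknown"))     # no close match -> None
```

The `None` case matters as much as the matches: deciding what to do with values the vocabulary can’t absorb is part of answering “misspellings will happen, how will you handle them?”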
Where are you on the continuum of accuracy? The higher the accuracy you need, the exponentially more effort it takes to maintain. What is good enough? What would you be ok with? At what point do the users trust the system? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Who owns the data? What must they reveal? What does the org own vs the user? What does the org have the right to access? What are the barriers to getting to the data? Who can access the AI and how? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Will someone have access to the system but not access to the data they need? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Who gets to use our tools? Who are we making these for? If we do not design for people with disabilities we’re being ableist #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Who owns the ethics? Is there someone who should be specializing and have the responsibility to look at the ethical consequences of the decisions? Is it a shared responsibility? What happens when disagreements arise? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
What is the data based on? Are they blogs and a wiki, or trusted journals? How do I know this is a trustable source? Who trained it? Who tagged individual items? How do I know I can trust that? Transparency! #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Acknowledge that there could be biases! If we don’t present data clearly enough a bad decision could be made – make that clear! #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
How do we show a user what “wrong” or “biased” looks like? If we do the work to show them what it looks like, then not only do we know, but our users can help us monitor for problems because they’ll know issues when they see them #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
How can someone contest or record a problem if they see it? There’s a law coming up in the EU in May to allow users to appeal decisions made by AI #ias18
— Anne Gibson (@perpendicularme) March 23, 2018
Crazy and evil both can happen. We need to enforce against them. Confidence in content is a step for that. Think through how our users understand why the AI suggests something #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
As we continue to teach and monitor, we water with data, thin, prune, and cull. If we don’t ask tough questions, who will? Do we want these systems to be as respected as well-trained humans? Teach the AI to share our values. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Make sure we build ethical AI. Create a code of ethics/conduct so your organization agrees on what’s important to you and everyone agrees on what is too far #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Take responsibility to keep humans in control. It should do things for us. Hire people affected by bias so they can help you make sure you’re making great systems. If you don’t have a diverse team how do you know you’re not inherently biased? #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
Conduct regular audits. Systems don’t “go crazy” – they do things for reasons, and you need to find those reasons. Do retrospectives. #ias
— Anne Gibson (@perpendicularme) March 23, 2018
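One concrete form a regular audit can take is checking whether the system’s decisions diverge across groups, then flagging the gap for a retrospective. A hypothetical sketch; the decisions, group labels, and threshold are all made up for illustration:

```python
# Hypothetical audit sketch: compare an AI's approval rates across two groups
# and flag large gaps for investigation ("systems do things for reasons").
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(f"approval gap: {gap:.2f}")

AUDIT_THRESHOLD = 0.2  # assumed policy value
if gap > AUDIT_THRESHOLD:
    print("flag for retrospective: find the reason the rates diverge")
```

The audit doesn’t explain the gap by itself; it surfaces the reason-finding work the retrospective exists to do.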
Learn to make ethical transparent and fair AI #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
“I guarantee the AI did not become sentient” when something goes wrong. Demystify the AI. Use plain language. Teach people how to use the system. Provide easy ways to raise concerns. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
The key is encouraging skepticism. Healthy skepticism that’s constructive is important. Don’t just listen to the experts and trust everything they say. Question them and make sure what they’re saying really is true. Allow and encourage good discourse on questions. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
What does an AI expert do? They may be an expert in writing algorithms and talk to experts to get information, then painstakingly learn data from them. #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018
In an industry like transportation, how do you prepare for the inevitable thing going wrong when it’s so awful? Do everything we can #IAS18
— Anne Gibson (@perpendicularme) March 23, 2018