Ethics in AI: Is this a thing? If not, how do we make it one? Incorporate it from the start, of course!

Do repost and rate:

They terk err jerbs!

"It might sound like an overstatement, but if your product is not going to incorporate AI in some shape or form, there’s a good chance that you will become obsolete. All of the tools that we love and already use have some AI features in place, or they’ve announced that they will."

— Guytis Markevicius; Digital Product Design expert; Designer Debate: AI Ethics in Design

With the current hype surrounding AI and the rush to put it in almost every new product a company develops, developers face new challenges (including the possibility of AI being used to automate/do work that was previously done by people). That bothers me.

Look, I'll admit that I use generative AI and find it useful, because it allows me to get a computer to create things that I otherwise couldn't. I don't have a problem with that. My concern is that AI will get better than I am at the tasks I do reasonably well and for which I have developed/practiced skills. When an AI can develop software that's more extensible/modifiable and reliable/stable (fewer bugs), then I'm in trouble, particularly if it gets better at it faster than I can level up my knowledge and skills. I don't even know if that's already part of why I'm struggling to find work or if it's just because I'm currently behind the curve. I certainly hope not.

What happens when AI does our thinking for us?

If you've ever written code as a professional developer, you've likely done it.

TL;DR: The Internet is distracting you and making you stupid.

First, it was supposedly passively watching TV that made us stupid. Then it was the Internet/Web that was doing it by eroding our ability to concentrate. Then smartphones kept distracting us. At the time of writing, AI isn't at the level where it exhibits much more intelligence/knowledge than that of a small/preteen child, despite the fact that people attempt to use ChatGPT to write essays and blog articles (and fail miserably in most cases). However, AI's capabilities will improve with time, and it's clear that people want them to, so that they don't have to make the effort of thinking for themselves.

"Copying and Pasting from Stack Overflow" got an "O Rly?" fake book cover for a reason: People do it a lot. I've done it so much that I ended up creating com.stackoverflow.{lang} libraries/namespaces/modules/packages of useful code for projects on which I've worked. I don't disclose that I use it (although perhaps I should), since anybody can look in the "libs", "vendor" or appropriate directory and see that it's there. Getting AI to write code for us (Copying and Pasting from ChatGPT) seems the logical progression of that, IMO. Over-reliance on anything, AI or otherwise, is generally bad for us in the long term.

"I fear that when thinking is automated, and you can [seemingly] get an answer to any problem, the mantra of the future might be: “I don’t have to think, because the AI will just get me the right answer.”

Just type a prompt and an email pops out, and it’s exactly what you want to say. Is that really my thinking? Am I a creative director or am I just a consumer repurposing a machine’s content? That’s where the danger may come in."

— Darrel Estabrook; UX Design expert; Designer Debate: AI Ethics in Design

There's a UX/UI principle of "Don't make me think", popularised by Steve Krug in his book of the same name. The gist of it is that complex/complicated software should have simple interfaces that don't present a steep learning curve (because most end users, enabled by MS Windows and software for it, tend to be lazy/stupid). However, people tend to misunderstand that as "do my thinking for me" because they can't be arsed to do it themselves. Many of us (Millennials and zoomers) can't do basic Maths problems without reaching for our calculators or remember a birthday or phone number without using our smartphones. In what ways will AI enable us to be even lazier and use less of our brains than we already do if it does our thinking for us? Why would anyone go to the trouble of thinking for themselves when an AI or device can do it for them, and faster? Should it even be allowed to? (If not, how do we prevent that from happening, since one of the main aims is for machines to learn and improve their data-based "knowledge"?)

"It's a positive tool that can help us to eliminate some of the repetitive tasks that we find annoying, like generating a bunch of prototypes or coming up with nine different variations of small buttons. Things like that do not really require a lot of experience — it just requires time. Designers can talk with people, we can try to understand what they want, and AI is not able to do that. But it can eliminate the boring stuff for us and allow us to do more strategic thinking for our clients."

— Guytis Markevicius; Digital Product Design expert; Designer Debate: AI Ethics in Design

One does have to keep in mind that, for all its ability to process huge amounts of data in large language models (LLMs) and simulate creative/original thought/output, AI doesn't actually do anything more than combine, in multiple ways, or reproduce what already exists. That's because it can't actually think like we do (as discussed in the AI for Humans posts I wrote). True originality is a form of natural/biological intelligence currently beyond its capabilities and likely will be for a very long time if the underlying computing architecture isn't radically changed. At best, it can automate the boring stuff. I do wonder about this, though: If the simulation is good enough to appear original (not unmistakably AI-generated), does that matter? To my mind, it doesn't.

"Right now, there are two major boundaries to be figured out in AI and ML.

The first one is artificial general intelligence (AGI). This is starting to become a large focus area (e.g., DeepMind’s Gato). However, there’s still a long way to go until AI reaches a more generalized level of proficiency in multiple tasks, and facing untrained tasks is another obstacle.

The second is reinforcement learning. The dependence on big data and supervised learning is a burden we need to eliminate to tackle most of the challenges ahead. The amount of data required for a model to learn every possible task a human does is likely out of our reach for a long time. Even if we achieve this level of data collection, it may not prepare the model to perform at a human level in the future when the environment and conditions of our world change.

I don’t expect the AI community to solve these two difficult problems any time soon, if ever. In the case that we do, I don’t predict any functional challenges beyond those, so at that point, I presume the focus would change to computational efficiency—but it probably won’t be us humans who explore that!"

— Daniel Perez Rubio; data scientist, developer and NLP Engineer; Ask an NLP Engineer: From GPT Models to the Ethics of AI

If you've never watched Office Space, I highly recommend that you do.

AI cannot understand what people want, no matter how detailed the prompts we give it. Then again, it can be argued that software developers aren't particularly good at that, either, when it comes to client/customer requirements. We need people with people skills, who understand both clients (Eloi) and developers (Morlocks), if you'll indulge a metaphor, to act as liaisons.

"Generative AI is good news for experienced designers, but bad news for junior designers because AI can do a lot of the boring stuff that companies hire them to do—like checking if everything is pixel-perfect, or creating initial drafts for user personas."

— Guytis Markevicius; Ibid

"With AI, we're not just speeding up the mundane or eliminating some processes. The act of problem-solving is now in a box: You can give it a fuzzy parameter and get a directional result. So why wouldn't you use it? It’s there.

As a design coach, I want to encourage junior designers to branch out from that, to take this generative content and use it as a stepping-off point. Otherwise, you’ll just pull up the most convenient AI model and take its output and think you're solving a problem. And you may actually solve a problem! It may work in some very low-needs kind of situations. Examples might be checking a spectrum of colors in a palette to see if they all pass accessibility and perceptibility thresholds, or building a set of UI form elements from a sample text input design.

But for more complex problems — like generating an executive dashboard based on a financial services data set, or generating a multi-screen workflow based on user interviews — I think the question is, Where do we plug AI in?"

— Darrel Estabrook; Ibid

AI is definitely going to make a lot of jobs redundant, but it's also going to bring with it opportunities for creating new ones (just like any major technological change before it). As I see it, the only way to not be replaced or made redundant by AI is to learn how to develop, guide and incorporate it, to be on the right side of change/history. (If you can't beat it, embrace it.) The focus is probably going to shift to asking these questions: Can you use AI? How will you use AI to solve a particular problem? More importantly, why did you choose a particular AI-driven tool to do so? There's still going to be creative thought and reasoning required of those who use AI to solve problems.

"As a designer, you want to be in demand. So obviously you need to have the skills to work with AI and understand how it works. I would definitely say learn what it's doing, but it is impossible to use everything. Just try to get the gist of what's going on, where things are moving, and start learning how to create interfaces that help users interact with AI."

— Guytis Markevicius; Ibid

The Ethics of Using AI

One of the questions that comes up is this: If you use AI in your work (as most of us either do or will), should you disclose that? On the one hand, I feel that the answer should be "yes". I want to know what technologies are used in the products I buy/use and in their creation, so that I can make informed decisions about if/how I use them. On the other, I'm concerned that if I disclose that I use AI to generate the digital art I'm hoping to sell, that'll somehow diminish its perceived value (even though I wrote the scripts to access the APIs unaided by AI). If a computer generated it, is it art? At this stage and in that specific context, though, the answers are pretty much moot, since I have no subscribers on my Patreon. In a general and wider context, they might well be valid.

"The key is that I'm using my creativity to make adjustments along the way. But as a creative person who stands behind the work I do, I have a hard time putting my face in front of the generative AI content and saying, “I did this.” Trust is a foundational ingredient in any relationship, and it has to be earned. If I were to use AI in the process, I would let the client know it. “Show your work” is not just a good adage for math proofs — it's good design practice for everyone."

— Darrel Estabrook; Ibid

That sums up my moral dilemma with using generative AI to produce digital art. Yes, I wrote the code to leverage the APIs, and I thought up and adapted the prompts I feed the software based on feedback/results, but I didn't create the images that the AI spits out. Does that make them of less artistic merit, worth less money/crypto, than if I'd made them myself using learned skills and honed talent? I suspect that it does. I suspect that's why I get very little interest in them (or it could just be that I suck at marketing).

Given the fact that people know that Google and Meta use AI algorithms that are actually harmful to them (emotionally and mentally manipulative, etc.) and still use their products (because they're "free" and "convenient"), I can assume that most people don't actually care about the use of AI and the impact it has, whereas I do. That could be used as grounds not to declare what tools one uses, but is that ethical? I don't disclose what editors/IDEs and servers I use for the software I make, because that's not relevant to its functionality. However, if I were left with no choice but to incorporate AI or elements of the MEARN stack, then I would disclose that. I feel I have a responsibility to warn people of potential dangers to their online privacy and/or security so that they can make informed decisions. I suspect few share that stance, given that it was only by looking at Fiverr's HTML source code that I discovered it uses Google and FB code (including Node and React); nothing else on the site indicated that.

If you're presenting a research paper or thesis, you have to cite your sources. If AI is used in the preparation or research, it's reasonable to assume you'd have to state what tools you used and which results were provided by AI. As it is, when I use photos from Pexels for thumbnails on my posts, I state as much. Using content generated by AI should be treated the same way as using stock photos: Inform your clients that you have done so, since they have to trust your choices and process(es). Any mistakes you make or don't correct are your responsibility, not that of the tools and resources you use, AI or otherwise.

That AI can generate biased/fake news content, financial data or other misinformation (referred to by AI experts as "hallucinations"), combined with its massive growth and influence and the push to incorporate AI into existing technologies or create new ones that leverage it, is cause for concern and underlines the need to have ethical guidelines in place.

While the ethics and ramifications of using AI may have been debated and codified for years to decades, the fact that this technology has gone from being theoretically possible to actually existing (and bringing with it certain problems and risks) hastens the need for guidelines to be put in place.

Is it acceptable for AI/persuasive tech to manipulate users in order to achieve a corporation's goals, with no regard for how that manipulation affects their emotional and mental health? I (and others, such as Tristan Harris) think not, but Mark Zuckerberg and Sheryl Sandberg don't share that view. Nor will they do anything to change that if left to fix their products by themselves. (To them, people are little more than numbers, the data behind their engagement and growth metrics. Furthermore, acting ethically/responsibly and with transparency is perceived to be bad for business and thus neglected. I've seen that attitude on a smaller scale, working for an MNC that made devices for foot counts and vehicle guidance in malls: The people moving across/under the sensors end up being viewed as nothing more than a means of generating numerical data. It's easy to lose sight of the fact that that data is generated by actual people, who are more than "a few pixels on a screen". One of the devices the company made was a "ticket validity scanner" for airports and train stations, which was marketed as being smart enough to read a ticket and determine if it was valid. What it actually did was turn on a green light if a piece of card, even a blank one, was placed between an infra-red transmitter and receiver; no more and no less.)

There is also the fact that AI models tend to exhibit the same biases as those who create and train them. The data set has to be both large and diverse in order to mitigate this risk. It is not ethical to use a biased system for making critical decisions that affect people's lives (such as medical diagnosis, or a hypothetical crime-prevention system similar to that depicted in Minority Report, for example).

How does one effectively navigate the challenges of authenticity, transparency, and clear data ownership rights? Do I own the rights to content produced by generative AI or do the developers of the AI software?

Here are two main concerns regarding generative AI (according to the 5 Pillars of Responsible Generative AI article on Toptal):

  • Deepfakes/Misinformation: "Generative AI can accelerate misinformation due to the low cost of creating fake yet realistic text, images, and audio. The ability to create personalized content based on an individual’s data opens new doors for manipulation (e.g., AI voice-cloning scams) and difficulties in detecting fakes persist." Hell, you can find ultra-realistic deepfake depictions of celebrities in the nude or engaged in compromising activity (both images and video) if you look hard enough. AI adds a different dimension to "the fappening". Possibly worse, there's a deepfake viral propaganda video on the Web, of Joe Biden saying some inflammatory/provocative things about the Russian-Ukrainian war, which is sure to heighten hostilities or even further divide the Left and Right contingents in the USA. Never mind the Deep State, you've got the Deepfake State!
  • Bias/Prejudice: Human bias has always been a big concern for AI algorithms. It perpetuates existing inequalities in major social systems such as healthcare, housing, job recruiting and the criminal justice system. The Algorithmic Accountability Act was introduced in the US in 2019, reflecting the problem of increased discrimination. Generative AI training data sets amplify biases on an unprecedented scale.

"Models pick up on deeply ingrained societal bias in massive unstructured data (e.g., text corpora), making it hard to inspect their source. The risk of feedback loops from biased generative model outputs creates new training data (e.g., when new models are trained on AI-written articles).

In particular, the potential inability to determine whether something is AI-/human-generated has far-reaching consequences. With deepfake videos, realistic AI art, and conversational chatbots that can mimic empathy, humor, and other emotional responses, generative AI deception is a top concern."

— Heiko Hotz, senior solutions architect for AI and ML at AWS

As previously mentioned, the question of data ownership — and the corresponding legalities around intellectual property and data privacy — is an important issue. Large training data sets make it difficult to gain consent from, attribute, or credit the original sources. Compounding the problem are advanced personalization abilities that mimic the work of specific musicians or artists (such as Adele or Lady Gaga), creating new copyright concerns. In addition, research has shown that LLMs can reveal sensitive or personal information from their training data. An estimated 15% of employees are already putting business data at risk by regularly inputting potentially sensitive company information into ChatGPT, according to Toptal.

"To combat these wide-reaching risks, guidelines for developing responsible generative AI should be rapidly defined and implemented. Ethical generative AI is a shared responsibility that involves stakeholders at all levels. Everyone has a role to play in ensuring that AI is used in a way that respects human rights, promotes fairness, and benefits society as a whole.

Developers are especially pertinent in creating ethical AI systems. They choose these systems’ data, design their structure, and interpret their outputs, and their actions can have large ripple effects and affect society at large. Ethical engineering practices are foundational to the multidisciplinary and collaborative responsibility to build ethical generative AI."

— Ismail Karchi; Data scientist; built a GPT chatbot at a healthcare startup, developed a customer profiling system impacting millions

Building responsible generative AI requires investment from many stakeholders. // Image from Toptal Engineering

According to Karchi, achieving ethical/responsible AI requires embedding ethics into the practice of engineering on both educational and organizational levels, from the ground up (rather than as an afterthought to be incorporated into existing models):

"Much like medical professionals who are guided by a code of ethics from the very start of their education, the training of engineers should also incorporate fundamental principles of ethics."

There are five proposed pillars of ethical AI: Accuracy, Authenticity, Anti-bias, Privacy and Transparency. Below I've extracted the important parts of the Toptal Engineering article's explanation of these:

  • Accuracy: With the existing generative AI concerns around misinformation, engineers should prioritize accuracy and truthfulness when designing solutions. Methods like verifying data quality and remedying models after failure can help achieve accuracy.
  • Authenticity: Generative AI has ushered in a new age of uncertainty regarding the authenticity of content like text, images, and videos, making it increasingly important to build solutions that can help determine whether or not content is human-generated and genuine. As mentioned previously, these fakes can amplify misinformation and deceive humans. For example, they might influence elections, enable identity theft or degrade digital security, and cause instances of harassment or defamation. Techniques for combating this include deepfake detection algorithms, digital watermarking/steganography and leveraging cryptographic/blockchain technology (such as Filecoin and Storj) for secure storage of digital assets. (A minimal provenance-signing sketch follows this list.)
  • Anti-Bias: Biased systems can compromise fairness, accuracy, trustworthiness, and human rights—and have serious legal ramifications. Generative AI projects should be engineered to mitigate bias from the start of their design. There are two main methods of doing this: Diverse Data Collection (the data used to train AI models should be representative of the diverse scenarios and populations that these models will encounter in the real world) and using bias detection and mitigation algorithms (data should undergo bias mitigation techniques both before and during training, such as using adversarial debiasing, which has a model learn parameters that don’t infer sensitive features). (A toy bias-check sketch also follows this list.)
  • Privacy: Although there are many concerns about general data privacy (including consent and copyrights) with AI, this pillar focuses on user data privacy in particular. AI makes data vulnerable in multiple ways: It can leak sensitive user information used as training data and reveal user-inputted information to third-party providers, which happened when Samsung's company secrets were exposed. One of the best ways to protect such data is to host an LLM either on-premises or in a secure private cloud only available to the specific company using it.
  • Transparency: Transparency means making generative AI results as understandable and explainable as possible. Without it, users can’t fact-check and evaluate AI-produced content effectively. While we may not be able to solve AI’s black box problem (creating AI algorithms in such a way that they explain what they're doing and why) anytime soon, developers can take a few measures to boost transparency in generative AI solutions. One of these is having AI include references to the source material for content it provides. Another is to clearly state what features of a product use AI and the type thereof.
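
To make the Authenticity pillar a little more concrete, here's a minimal Python sketch of the cryptographic-provenance side of it: publishing a signed fingerprint alongside a generated asset so that anyone who can reach the (hypothetical) signing key can later check whether a copy has been tampered with. This is my own illustration, not the Toptal article's method or any particular product; the key and the asset bytes are placeholders.

```python
import hashlib
import hmac

# Hypothetical signing key held by whoever publishes the AI-generated asset.
SIGNING_KEY = b"replace-with-a-real-secret"

def sign_asset(asset_bytes: bytes) -> str:
    """Return an HMAC-SHA256 tag to publish alongside the asset."""
    return hmac.new(SIGNING_KEY, asset_bytes, hashlib.sha256).hexdigest()

def verify_asset(asset_bytes: bytes, published_tag: str) -> bool:
    """Check that a copy of the asset matches the tag published for it."""
    return hmac.compare_digest(sign_asset(asset_bytes), published_tag)

if __name__ == "__main__":
    image = b"...raw bytes of an AI-generated image..."  # placeholder content
    tag = sign_asset(image)
    print("published tag:", tag)
    print("unchanged copy verifies:", verify_asset(image, tag))        # True
    print("tampered copy verifies:", verify_asset(image + b"!", tag))  # False
```

That only covers "prove this copy hasn't changed since it was published"; actual deepfake detection and steganographic watermarking are separate, much harder problems.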

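Similarly, for the Anti-Bias pillar, the simplest kind of bias detection is just measuring whether a model hands out positive outcomes at very different rates across groups. Below is a toy sketch of one such check (a demographic parity gap) over made-up binary predictions and group labels; the function name and data are mine, and adversarial debiasing itself is considerably more involved than this.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates), where gap is the difference between the
    highest and lowest positive-prediction rates across groups. A gap of 0.0
    means every group gets positive outcomes at the same rate; larger values
    are a signal that the model or its training data deserves a closer look."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy output from a hypothetical screening model (1 = approved).
    preds  = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)                  # {'A': 0.8, 'B': 0.2}
    print("gap:", round(gap, 2))  # 0.6 -- worth investigating before training continues
```

A real project would lean on a proper fairness toolkit and several metrics, but even something this crude flags a skewed data set before it does damage.
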
Persuading businesses (particularly large corporations) to get on board and change their culture(s), however, may prove difficult.

Generative AI ethics are not only important and urgent — they will likely also be profitable. The implementation of ethical business practices such as ESG initiatives is linked to higher revenue. In terms of AI specifically, a report by The Economist Intelligence Unit found that 75% of executives oppose working with AI service providers whose products lack responsible design.

The use of AI, specifically generative AI, is going to have a substantial impact on society at large (if it isn't doing so already). Any businesses that use it (or are considering doing so) are going to have to face the problems and risks it brings with it (including ethical dilemmas). These considerations/pitfalls are detailed in the Toptal Engineering "5 Pillars ..." article, but I am not going to reproduce them here; this post is already very long.

If you are interested in reading more on this topic or want more detail, I strongly encourage you to read the articles linked/referenced in the "Resources" section. Right now, I'm off to fight the continuing battle of honing my existing skills and learning new ones. It's the only way to stay relevant in a rapidly changing industry, if one works in the private sector.

Thumbnail image copyright Toptal Engineering
