According to HubSpot, 87% of businesses indicate social media is important to staying competitive. We don’t often write about social media on this website, but we do know how many privacy, public relations, and other concerns the corporate world has surrounding it. More and more, IT leaders are getting involved in marketing conversations — likewise, marketing is getting involved with technology decisions.
“Marketers were some of the first people to use analytics. But now that you can apply data mining and other analytical technology, we’ve seen increases in online sales and customer retention rates,” says Saundra Merollo, senior sales engineer at Sharp Electronics. “When you leverage AI and consumer personalization, it frees up your customers’ time.”
Yet social media in the enterprise world has brought about many concerns. A perfect summation of those concerns is found in the recent documentary, The Social Dilemma.
In it, former high-ranking employees from giant technology and social media companies like Twitter, Pinterest, Facebook, Firefox, and Google come together to explain the ethical concerns of social media technology, user data collection, and the sociological effects of using those technologies to drive only revenue (and not actual fact sharing).
Two key points from The Social Dilemma include:
- the notifications sent to users on social media platforms have an agenda: they exist not just to keep users connected to those they follow, but to keep them coming back to the platforms themselves (the algorithm isn’t designed for the user; it’s designed for the platform to make money off the user)
- user internet activity is being tracked in incredibly specific detail in order to deliver targeted advertisements
As Catherine Wight points out in their Medium article, “the drug industry and social media industry are the only industries that call their customers ‘Users.’ Let that sink in a moment.”
“The psychology of persuasion is built into the AI technology today. The goal of social media is to get you to take action, use up all of your attention, and to intermittently reinforce these behaviors with rewards that will give you dopamine hits to the point of addiction.”
As one particularly memorable line in the documentary says: if you’re not paying for the product, you are the product.
Consumer concerns have power over corporations
As a slew of bloggers and thinkpiece writers drew further attention to the documentary’s points, we’ve seen some companies, including some whose platforms use AI, release statements outlining exactly what they do with personal data.
Social media marketing company JSMM released a statement expressing concern over how excessive social media use puts children in danger, referring to the documentary’s link between tween suicide rates and online activity and cyberbullying.
Ethos, another marketing agency, stressed in their statement that bad actors on social media don’t speak for the entirety of the tech industry.
“The film paints the picture that advertising on social media is used by bad actors such as foreign powers or domestic extremist groups that want to harm our democracy… There certainly are ethical implications to advertising on social media, but most of the agencies and brands out there do not have bad intentions. We take on clients that align with our own values, and we advertise in a way that considers the interests of the customer consuming the content.”
It is clear that consumers will eventually voice their displeasure to companies that use their data without explaining how. The question for IT departments, CIOs, CEOs, and other company leaders everywhere is: how will you choose to respond?
As the line between data collecting and privacy invasion grows thin and the use of AI widens across industries, IT departments should consider putting out ethics statements so that their end users and customers know precisely how their data is being used.
“With artificial intelligence tools like Google’s TensorFlow and scikit-learn, as well as ‘ML-as-a-service’ products, it’s never been easier for companies of all sizes to harness the power of data,” Merollo says.
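To illustrate how low the barrier has become, here is a minimal, purely illustrative scikit-learn sketch that predicts customer churn from a handful of made-up engagement features. The feature names and toy data are our own assumptions, not from any real dataset or from Merollo’s work:

```python
# Illustrative sketch: a tiny churn-prediction model with scikit-learn.
# All feature names and data points below are invented for demonstration.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy features per customer: [monthly_visits, support_tickets, months_subscribed]
X = [
    [30, 0, 24], [2, 5, 3], [25, 1, 18], [1, 4, 2],
    [40, 0, 36], [3, 6, 1], [28, 2, 20], [2, 3, 4],
]
y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = churned, 0 = retained

# Hold out a small test split, then fit a simple linear classifier
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

# Score a hypothetical low-engagement customer
print(model.predict([[5, 4, 2]]))
```

A dozen lines like these are exactly the kind of capability that, at scale and on real behavioral data, raises the transparency questions discussed throughout this article.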
A Gartner poll of roughly 200 business and IT professionals revealed that 24% of respondents’ organizations increased their artificial intelligence (AI) investments since the onset of COVID-19, and 42% kept them unchanged. Cost optimization, customer experience and retention, and revenue growth were the top focus areas for those AI initiatives.
Over the next six to nine months, 75% of respondents said they will continue or start new AI initiatives as their organizations move into the renewal phase of their post-pandemic recovery.
But whether you’re using machine learning for smart building applications, digital marketing, or tracking customer experience, it may be worth the time to survey your company for its communication between you and your customers. Do they understand fully how their information is used?
“Designing AI to be trustworthy requires creating solutions that reflect ethical principles and timeless values,” Merollo says.
To that end, Microsoft operates with six principles that all AI systems need to follow:
- Reliability & Safety, and Privacy & Security – “These are at the center of the diagram because these are areas where Microsoft has already made great advances,” she says. “The goal now is to apply those values while advancing AI systems.”
- Fairness – “AI systems should be designed to treat people fairly and avoid bias.”
- Inclusiveness – “This is on the opposite side. AI systems have the potential to create a great digital divide, or to narrow that divide which already exists. Microsoft believes that AI should be used to make computer systems more accessible.”
- Transparency and accountability – “These principles are foundational to the rest.”
- Transparency – “This means that AI systems should be understandable. Documenting data sources is important to identifying problematic recommendations and learned behavior.”
- Accountability – “This holds designers and developers ultimately accountable for how their AI systems are used. Organizational safeguards are necessary to ensure that people consider and protect the individuals affected by these systems. Building trust through responsible AI principles is imperative as the technology becomes a part of the products and services that people use every day.”
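The transparency principle above notes that documenting data sources is key to identifying problematic recommendations. One lightweight way to start is keeping a structured provenance record next to every dataset a model trains on. The sketch below is our own illustration of that idea; the field names are assumptions, not part of Microsoft’s framework:

```python
# Illustrative sketch: a structured provenance record for a training dataset,
# in the spirit of the transparency principle. Field names are invented here.
from dataclasses import dataclass, field, asdict

@dataclass
class DataSourceRecord:
    name: str
    collected_from: str            # where the data originated
    consent_obtained: bool         # did users agree to this use?
    retention_days: int            # how long raw records are kept
    known_limitations: list = field(default_factory=list)

record = DataSourceRecord(
    name="web_analytics_2020",
    collected_from="first-party site telemetry",
    consent_obtained=True,
    retention_days=90,
    known_limitations=["under-represents users who opt out of tracking"],
)

# Serializable form, suitable for auditing or publishing alongside the model
print(asdict(record))
```

Even a simple record like this gives auditors, and ultimately customers, a concrete answer to the question this article keeps returning to: where did the data come from, and on what terms?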
To conclude, here is a resource you can use to make better decisions for your organization. We share it on the assumption that you’ll view the power you hold over customer data through a serious, ethical lens: