THE AI EFFECT

DISCLAIMER

The following research has been collated and sourced appropriately, with references provided throughout; general opinions remain those of the author.

The AI Effect Timeline of Events So Far:

November 2022

Global fast-fashion brands such as H&M and Zara began optimising operations with AI tools, including chatbot services to handle purchase requests, smart warehouses for inventory management and autonomous vehicles for shipping.

December 2022

Leading US retailer Walmart navigated its peak Black Friday season with AI supply-chain solutions, using predictive analytics and trend patterns for inventory management to minimise wastage and out-of-stock items and keep pace with customer demand.

March 2023

Retailers began to take advantage of OpenAI’s ChatGPT natural language model, with Shopify, Instacart and Snapchat among the first adopters of its API and plugin integrations.

June 2023

Leading French grocer Carrefour announced three tech solutions: its shopping-advice chatbot ‘Hopla’, description sheets for own-brand products, and purchasing support.

January 2024

AI-powered robots are gaining popularity in the UK and Europe. 1MRobotics unveiled its autonomous storefront for pick-and-pack operations, accompanied by courier delivery and BOPIS (Buy Online, Pick Up In-Store).

January 2024

Walmart and Amazon both revealed that AI will assist and enhance customer-facing functionality, including product discovery and size charts, and generally improve offerings both in-store and online.

January 2024

NVIDIA suggested that loss prevention can be implemented through AI solutions to combat theft and shoplifting, with body cameras and RFID security technology enabling monitoring.

March 2024

US-based digitally driven mobile app and community platform ‘DressX’ unveiled its generative AI tool in March, available exclusively on Discord. Using simple prompts, and offering the ability to browse privately, DressX has been shaping the digital fashion landscape, helping customers visualise designs before purchasing by projecting them onto images the customers upload.

July 2024

UK-based fashion and lifestyle publication SheerLuxe introduced its brand-new AI-powered influencer ‘Reem’ to its readers. The brand described Reem as a fashion and lifestyle editor providing recommendations and new products to try. However, readers were not impressed, arguing that SheerLuxe was ‘depriving human journalists of a job’. SheerLuxe rejected these claims, stating that Reem is part of the company experimenting with AI technology, trialling innovative ways to stay ahead of the curve.

Assessing the AI cyber crime ecosystem:

In August 2024, software company IBM released its annual ‘Cost of a Data Breach Report’, detailing how the retail industry globally faces average data-breach remediation costs of up to USD$3.48M (GBP£2.74M), an increase of 18% in 2024 compared to 2023. However, these figures remain among the lowest when set against the global cross-industry average, which exceeds GBP£4M. The five regions with the highest data-breach costs were the US, the Middle East, Germany, Italy and Benelux, while Canada and Japan experienced a reduction in costs.

Alarmingly, the report found that one third of retailers are now using AI to automate tasks, an increase from 25% in 2023.

While organisations rush to secure their assets with AI, an estimated 24% of AI models are themselves not secure. Security vulnerabilities are said to be the biggest concern, with cyber criminals threatening to take advantage of Large Language Models (LLMs) in an effort to obtain sensitive company and personal data input by employees. Likewise, generative AI is playing a role in enhancing phishing attacks by allowing non-English speakers to produce grammatically correct and coherent phishing messages.
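
To illustrate one mitigation for this data-leakage risk, below is a minimal Python sketch of a pre-submission redaction filter that masks obvious sensitive patterns before a prompt is sent to an LLM. The patterns, function name and sample text are illustrative only; a real data-loss-prevention control needs far broader coverage.

    import re

    # Illustrative patterns only; real DLP tooling covers many more data types.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        """Mask sensitive substrings before the prompt leaves the organisation."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact("Contact jane.doe@example.com, card 4111 1111 1111 1111."))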

At the same time, there is a general lack of cyber security training amongst employees, and the skills gap is widening year on year, with an estimated 3.5M cyber security jobs set to be unfilled by 2025.


By design, phishing is a type of social engineering scheme whereby a threat actor crafts an email, or an SMS message (a.k.a. smishing), to steal sensitive data from its targets. Going a step further, the content can be tailored to a specific individual or organisation (a.k.a. spear phishing) in order to extract company details or financial records, or to have funds transferred to an attacker-controlled environment. This information can then be used in future campaigns, possibly against clients and customers.

Within the phishing email, a link redirecting to a fake login page or website, or an attached PDF document, will be included; once clicked, it will either ask for credentials or directly download malware onto the target device, infecting the machine and giving the threat actor a way into the organisation. For individuals, the impact can be loss of credentials (username and password) and financial data (credit card information). For organisations, the loss of customer and client trust, alongside confidential documents and records, results in reputational harm.
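
As a rough illustration of why these links succeed, the Python sketch below (the sample HTML and names are hypothetical) flags anchors whose visible text advertises a different domain from the real destination, a classic spear-phishing tell.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkAuditor(HTMLParser):
        """Collect (href, visible text) pairs from an email body."""
        def __init__(self):
            super().__init__()
            self.links, self._href, self._text = [], None, []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href, self._text = dict(attrs).get("href", ""), []
        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)
        def handle_endtag(self, tag):
            if tag == "a" and self._href is not None:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def suspicious(href: str, text: str) -> bool:
        # Flag links whose visible text names a domain other than the real target.
        real_host = urlparse(href).hostname or ""
        return text.startswith("http") and real_host not in text

    auditor = LinkAuditor()
    auditor.feed('<a href="http://evil.example/login">https://yourbank.com</a>')
    for href, text in auditor.links:
        print(href, "->", text, "SUSPICIOUS" if suspicious(href, text) else "ok")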

RECOMMENDATIONS

If you open a phishing email and click on an attached link from a corporate device, report it to your organisation straight away, for example through the 'Report phishing' button for Outlook users. Alternatively, blocking the sender, visiting the legitimate site directly for confirmation, reporting to Action Fraud if the message arrived in a personal account, and setting up an email gateway to filter messages before they reach your inbox can all help.
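
As a sketch of the email-gateway idea, the snippet below quarantines messages from blocklisted domains using only Python's standard library. The domains and function name are hypothetical, and a production gateway would also check SPF/DKIM/DMARC results, sender reputation and message content rather than a static list.

    import email
    from email import policy

    # Hypothetical blocklist; production gateways use live reputation feeds.
    BLOCKED_DOMAINS = {"evil.example", "phish.example"}

    def should_quarantine(raw_message: bytes) -> bool:
        """Return True if the sender's domain is blocklisted."""
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        sender = (msg.get("From") or "").lower()
        return any(domain in sender for domain in BLOCKED_DOMAINS)

    raw = b"From: attacker@evil.example\r\nSubject: Invoice\r\n\r\nPay now."
    print(should_quarantine(raw))  # True -> divert to quarantine, not the inbox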

Recently, Apple Intelligence has begun marking emails as priority: Apple's strategy involves saving users time by surfacing the emails that should take priority, including some suspected to be phishing. The potential drawback is that pushing phishing emails to the top may cause users to click links or download malicious content by mistake, especially without proper cyber security awareness training in place. Apple Intelligence is still in beta, so these teething issues can be addressed in future.


Deepfakes

Under the EU AI Act, deployers who use AI systems to create deepfakes are required to clearly disclose that the content has been artificially created or manipulated, by labelling the AI output as such and disclosing its artificial origin.

In February 2024, one of the first large-scale deepfake fraud scams took place against a financial services company, with attackers impersonating the chief financial officer (CFO) to dupe an employee into transferring large funds into an attacker-controlled environment.

The scheme was well played, with the threat actor staging an entirely manufactured Teams conference call to make it appear as if the employee was on a regular call with colleagues. While there was some initial hesitation after receiving an unexpected message from the CFO, suspected to be a possible phishing attempt, the employee was reassured after seeing his supposed colleagues on screen, resulting in USD$25.6M being transferred.

In the retail and fashion industries, facial recognition is used to protect user accounts from account takeover. Brute force is an attack vector in which credentials are guessed repeatedly until a person's username and password are identified. However, authentication methods such as multi-factor authentication (MFA) and facial recognition can help stop unwanted access.
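
To show how one of these controls works in practice, here is a minimal time-based one-time-password (TOTP) sketch using the third-party pyotp package; the user name and issuer are hypothetical.

    import pyotp  # third-party package: pip install pyotp

    # Enrolment: generate and store a per-user secret on the server.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleShop"))

    # Login: after the password check, require the current six-digit code.
    code = totp.now()  # in practice the user reads this from their authenticator app
    print("MFA passed:", totp.verify(code))

Even if brute force eventually recovers the password, the attacker still needs the rotating code, which is why MFA blunts brute-force and credential-stuffing attacks.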

Deepfake phishing, similar to the CFO fraud case above, is another attack vector that uses social engineering to manipulate victims into revealing sensitive information. The objective is to bypass security controls and gain information to further future campaigns (client and partner data can help carry out supply-chain compromise), or to have an immediate impact through financial gain.

In January 2023, Italian clothing brand Cap_able made a bold move, creating a collection called "Manifesto" designed to evade AI facial recognition by making object detectors believe the person wearing the clothes is an animal. The pieces were tested against the YOLO object-detection system, with the adversarial patterns knitted on a computerised knitwear machine.
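
For context, this is the kind of check Cap_able's YOLO tests imply, sketched with the third-party ultralytics package and a hypothetical photo; if the adversarial knit works, the "person" detection disappears or is mislabelled (for example, as an animal).

    from ultralytics import YOLO  # third-party package: pip install ultralytics

    model = YOLO("yolov8n.pt")  # small pretrained COCO model, downloaded on first use
    results = model("person_in_adversarial_knitwear.jpg")  # hypothetical test photo

    # List every detected class with its confidence; an effective pattern
    # should suppress or misdirect the "person" detection.
    for result in results:
        for box in result.boxes:
            print(model.names[int(box.cls)], round(float(box.conf), 2))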

In March 2024, China-based fast fashion brand Temu came under scrutiny for an arguably overstepping prize giveaway, which saw the powerhouse offer a GBP£50 reward to new customers in exchange for permanent access to their data, including voice and biographical information. Although no strings were attached beyond customers surrendering the privacy of that data, questions were raised: if the data were compromised, could it enable the creation of deepfakes for malicious purposes?

Social Media Marketing Gone Wrong:

Just as we see in social engineering scams, where threat actors impersonate real people to gain victims' trust and eventually steal from them financially, we are now seeing cases of 'AI voice cloning' via social media platforms, with attackers cloning the voice behind a well-known person's popular account to profit from victims.

Those accustomed to social media will have noticed how platforms such as Instagram and Facebook switched up their viewing strategies, replacing the old algorithm with a domino-effect "discovery engine" that brings users into contact with posts outside their explore page or previous searches, in a bid to replicate TikTok's "For You" page. This shift has been prevalent between 2023 and 2024 and is set to keep pace for the rest of this year, unless a new change hits users and turns the apps upside down again. Likewise, it is suggested that smaller businesses and lower-viewed accounts will have further opportunities to be featured, as Instagram in particular pushes this content over high-profile accounts.

What does all of this have to do with the threats posed by AI?

Due to the algorithm changes, more AI-generated content is being pushed to unknowing users, creating a mesh of information that makes it harder to distinguish real from manipulated content. For example, AI-powered deepfakes are thriving in fake ads; within retail and fashion they are used to promote beauty and skincare products and to pose as influencers in altered shopping promotional videos.

The rise of ‘GPT’ Variants:

FleeceGPT –

In May 2023, Sophos released a report titled 'FleeceGPT mobile apps target AI-curious to rake in cash', detailing how users were tricked through malicious pop-up advertisements into downloading malicious apps from stores such as Google Play and Apple's App Store, only for in-app functionality to be reduced shortly after installation unless subscription costs were paid.

Black Hat AI Tool Discovery  –

In July 2023, discussions on the dark web shifted to incorporate the AI variants WormGPT and FraudGPT. WormGPT is based on GPT-J, an open-source six-billion-parameter pre-trained transformer model, and is able to generate malicious Python scripts while offering a range of services through its domain.

XXXGPT –

In August 2023, security researchers identified hacktivist groups discussing the black-hat AI tools XXXGPT and WolfGPT on a dark web hacker forum. WolfGPT is a Python-built alternative to ChatGPT that offers advanced phishing attacks and confidentiality to its users, while XXXGPT provides code for botnets, attacks on point-of-sale systems and ATMs (with cash withdrawal as the goal), infostealers, RATs (remote access trojans) and other malware.

Combatting AI challenges with Legislation:

EU AI Act

The AI Act is a European Union regulation establishing a common regulatory and legal framework for AI within the European Union. It entered into force on 1 August 2024 and impacts high-risk AI systems that rely on techniques involving the training of models with data. Organisations using these AI systems will be expected to validate training and testing data sets against the new quality criteria.

If organisations do not comply with the regulation, penalties apply. The maximum penalty for non-compliance with the EU AI Act's rules on prohibited uses of AI is the higher of an administrative fine of up to EUR€35M or 7 percent of worldwide annual turnover (Art. 99(3) EU AI Act). Breaches of certain other provisions are subject to a maximum fine of EUR€15M or 3 percent of worldwide annual turnover, whichever is higher. The maximum penalty for providing incorrect, incomplete or misleading information to notified bodies or national competent authorities is EUR€7.5M or 1 percent of worldwide annual turnover. For SMEs and start-ups, the fines for all of the above are capped at the same amounts or percentages, but whichever is lower (Art. 99(6) EU AI Act).
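
To make the higher-of/lower-of logic concrete, the short sketch below computes the cap from the figures above; the tier labels and function name are my own shorthand, not terms from the Act.

    # Maximum EU AI Act fines per Art. 99: a fixed amount versus a share of
    # worldwide annual turnover. Tier labels here are illustrative shorthand.
    TIERS = {
        "prohibited_use": (35_000_000, 0.07),   # Art. 99(3)
        "other_breach": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }

    def max_fine_eur(turnover_eur: float, tier: str, sme: bool = False) -> float:
        fixed, share = TIERS[tier]
        candidates = (fixed, share * turnover_eur)
        # Ordinary firms face the higher figure; SMEs/start-ups the lower (Art. 99(6)).
        return min(candidates) if sme else max(candidates)

    print(max_fine_eur(1_000_000_000, "prohibited_use"))        # 70000000.0
    print(max_fine_eur(1_000_000_000, "prohibited_use", True))  # 35000000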

EU Digital Service Act

This EU regulation aims to provide a safer experience for everyone within the online ecosystem by requiring digital services operating in the EU to improve transparency and accountability. The DSA targets online platforms, search engines, hosting services and intermediary services offering network infrastructure, combating the sale of illegal content, goods and services.
