The year of AI and cybercrime

2024 has been a year marked by technological advances that challenge the boundaries of privacy and data protection. With the rise of artificial intelligence (AI) and a steady increase in cyberattacks, the challenges for individuals, businesses and governments have been significant.

The big media star has been AI, still expanding rapidly but already established as a key tool in sectors such as health, finance and entertainment. Its rise has understandably generated concerns about privacy: technologies such as facial recognition and advanced language models have raised ethical questions about the use of personal data.

One of the central issues has been the training of AI models on personal data without users' explicit consent, or in violation of copyright. The European Union has begun to legislate in this area, but the effort is only just beginning, and curbing abuse of these technologies remains difficult, not least because the opacity of many algorithms hinders transparency and accountability.

The other major story of the year has been the constant media coverage of cybercrime in all its forms. Phishing, ransomware and malware have remained prominent threats. According to recent reports, ransomware attacks increased by 30% compared to the previous year, with a particularly severe impact on small and medium-sized businesses. These attacks typically involve encrypting sensitive data and then demanding a ransom to restore access.

Phishing has also evolved, leveraging more sophisticated techniques that mimic official communications almost perfectly. Meanwhile, malicious code has found new ways to infiltrate devices, from seemingly innocuous mobile apps to malicious links shared on social media.

On the regulatory front, 2024 has been fruitful. The European Union has implemented the Artificial Intelligence Act (AI Act), which establishes criteria for the development and use of this technology while seeking to ensure that AI systems are safe, transparent and respectful of the fundamental rights of European citizens.

In addition, enforcement of the General Data Protection Regulation (GDPR) has been strengthened, with a focus on the management of sensitive data and on reinforcing users' rights vis-à-vis large technology platforms, for example through more severe sanctions for companies that fail to comply. Finally, we highlight the Digital Operational Resilience Act (DORA), which obliges companies in the financial sector to implement robust cybersecurity measures to protect users' personal data from cyberattacks.

In conclusion, challenges persist and digital security education remains an outstanding issue for both individuals and companies.
