
Cybersecurity Trends for 2024 



    As digital threats evolve, staying ahead of the curve is crucial for both individuals and organisations. In this article, we delve into the latest cybersecurity trends, uncovering insights and strategies essential for your digital safety. Drawing on expert research and predictions from PrivacyEngine, we bring you a concise yet comprehensive overview of what to expect and how to prepare for the cybersecurity challenges of the coming year. Get ready to empower yourself with knowledge and stay one step ahead in the digital world. 

    AI (Artificial Intelligence) and Cybersecurity 

    Artificial intelligence (AI) is playing an increasingly significant role in both cyberattacks and cyber defences. On the one hand, cybercriminals are using advanced AI tools and techniques to launch more sophisticated and targeted attacks, such as using AI to generate realistic phishing emails, impersonate voices or faces, or bypass biometric authentication. On the other hand, cybersecurity professionals are leveraging AI to enhance their threat detection and incident response capabilities, such as using AI to analyse vast datasets and identify patterns, anomalies or deviations that may indicate potential security threats.  
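    As an illustration of the defensive side, pattern-based detection can be as simple as flagging statistical outliers in security telemetry. The sketch below (hypothetical data and thresholds, not a production detector) flags hourly failed-login counts that deviate sharply from the historical baseline:

```python
import statistics

def detect_anomalies(event_counts, threshold=3.0):
    """Flag counts whose z-score against the sample mean exceeds
    the threshold (here, three standard deviations)."""
    mean = statistics.mean(event_counts)
    stdev = statistics.stdev(event_counts)
    if stdev == 0:
        return []  # perfectly flat baseline: nothing to flag
    return [
        (i, count)
        for i, count in enumerate(event_counts)
        if abs(count - mean) / stdev > threshold
    ]

# Simulated hourly failed-login counts: a steady baseline with one spike.
logins = [12, 9, 11, 10, 13, 8, 11, 10, 97, 12, 9, 11]
print(detect_anomalies(logins))  # → [(8, 97)]
```

    Real AI-driven detection uses far richer models, but the principle is the same: learn a baseline from historical data and surface deviations for analysts to investigate.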

    According to Gartner, AI’s role in cybersecurity will expand to encompass automated responses and predictive analytics in 2024. This means taking preventive measures in advance, using AI to anticipate future cyber threats by analysing historical data and current trends. Integrating AI into cybersecurity applications can improve threat detection and incident response, making it possible to detect previously unseen attacks. However, AI also poses new challenges and risks for cybersecurity, such as ensuring the trustworthiness, transparency, and accountability of AI systems, protecting the data and models used by AI from tampering or theft, and dealing with the ethical and legal implications of AI decisions or actions. 

    The AI Act has been at the forefront of public discussion. PrivacyEngine has been actively involved in this area through the research of our Senior Privacy Researcher and Consultant, Dr Maria Moloney. “The AI Act has been 2.5 years in the making, and incredibly, we are still in the early stages of understanding its full impact. One thing that is certain at this time is that it is seen as hugely significant at both the EU and global levels,” states Dr Moloney. 

    With the final text of the AI Act now agreed upon, it’s anticipated that its scope will evolve to cover an expanded range of AI systems and applications, particularly those posing significant risks to human dignity, autonomy, and democracy. The Act may see future amendments to address specific areas like biometric identification, social scoring, emotion recognition, and content moderation more thoroughly. Adjustments could also be made to adapt to the dynamic nature of AI technology, potentially through mechanisms like periodic reviews or sunset clauses. Such flexibility ensures the Act remains relevant and comprehensive. Moreover, the establishment of the AI Office and the provision for delegated acts offer pathways for these necessary amendments, reflecting the adaptability required to govern the rapidly advancing field of AI effectively. 

    The risk assessment of AI systems is likely to be refined and standardised to ensure consistency and comparability across different sectors and countries. For example, the AI Act may provide more detailed criteria and indicators for determining the level of risk of an AI system, as well as more guidance and tools for conducting and documenting the risk assessment process. The risk assessment will need to be integrated with other existing frameworks and regulations, such as data protection, consumer protection, and environmental protection. 

    The governance of AI systems is likely to be strengthened and harmonised to ensure effective oversight and coordination at various levels. For example, the AI Act is set to establish a dedicated EU agency or body for AI regulation, which would be responsible for monitoring, auditing, certifying, and sanctioning AI systems and actors. The AI Act will also create a network of national authorities and competent bodies for AI regulation, which would cooperate and exchange information with each other and with the EU agency or body. The governance of AI systems will also need to involve more participation and consultation from civil society, academia, industry, and other stakeholders. 

    The AI Act’s enforcement is likely to be enhanced and diversified over time to ensure compliance and deterrence. For example, the AI Act will introduce more severe penalties and sanctions for non-compliance or violations of the AI rules, such as fines, bans, recalls, or criminal charges. The AI Act may also provide more avenues and mechanisms for redress and remedy for individuals or groups affected by harmful or unlawful AI systems or practices, such as complaints procedures, dispute resolution mechanisms, or collective actions. 

    Regulatory Expansion and Enforcement   

    One of the most significant trends in data privacy is the proliferation of laws and regulations that aim to protect personal data and empower consumers with more rights and choices. According to Gartner, by the end of 2024, 75% of the world’s population will have its personal data covered under modern privacy regulations. This means that organisations will have to comply with a variety of rules and requirements across different jurisdictions, such as obtaining consent, providing transparency, enabling access and deletion, implementing data protection by design and default, and reporting breaches. 

    Some of the major privacy regulations that are expected to be finalised or enforced in 2024 include: 

    • The Digital Markets Act (DMA) and the Digital Services Act (DSA) in the European Union will impose new obligations and restrictions on large online platforms and intermediaries, such as ensuring fair competition, preventing harmful content, and safeguarding user data. 
    • The ePrivacy Regulation in the European Union will update and harmonise the rules for electronic communications and online tracking, such as requiring consent for cookies and other identifiers and limiting the processing of metadata. 
    • The Consumer Data Protection Act (CDPA) in Virginia, the Colorado Privacy Act (CPA), and several other state-level laws in the United States will grant consumers new rights over their personal data, such as opting out of targeted advertising, requesting access and deletion, and obtaining data portability. 

    In addition to these new laws, existing regulations, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States, will continue to be enforced by authorities and challenged by activists. Gartner predicts that large organisations’ average annual budget for privacy will exceed $2.5 million by 2024, as they will have to invest in technology, processes, and personnel to ensure compliance and avoid fines. The California Privacy Rights Act (CPRA), amending and expanding the CCPA, further strengthens existing consumer rights, introduces new ones, enforces stricter regulations for businesses, and establishes the California Privacy Protection Agency (CPPA) for enforcement. Enforcement of this enhanced legislation is set to begin in March 2024, signalling a significant shift towards more robust data protection measures. 

    Alongside regulation, combating disinformation requires vigilance. Organisations and individuals need to be critical of the information they consume and share online, verify the sources and facts of any claims or news stories, report suspicious or malicious content or activity, and educate themselves and others about the risks and impacts of disinformation. 

    Zero-Trust Programs 

    Zero-trust is a security paradigm that assumes that no entity or network is inherently trustworthy, and that every request or transaction must be verified before granting access or privileges. Zero-trust aims to reduce the attack surface and prevent unauthorised access or data breaches by implementing strict policies, controls, and mechanisms across all layers of the IT environment, such as identity management, device management, network segmentation, encryption, logging, and monitoring. 
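    The “verify every request” principle can be sketched as a minimal policy check. In this hypothetical example (the resource names, roles, and checks are illustrative, not any vendor’s API), access is granted only when identity, device posture, and least-privilege policy all pass:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_compliant: bool
    mfa_verified: bool
    resource: str

# Hypothetical least-privilege policy: which role may reach which resource.
POLICIES = {
    "payroll-db": {"finance"},
    "source-repo": {"engineering"},
}
ROLES = {"alice": "finance", "bob": "engineering"}

def authorize(req: Request) -> bool:
    """Zero-trust check: no request is trusted for being 'inside' the
    network; identity, device posture, and policy are verified every time."""
    if not req.mfa_verified:      # identity must be freshly proven
        return False
    if not req.device_compliant:  # device posture must pass
        return False
    allowed = POLICIES.get(req.resource, set())
    return ROLES.get(req.user) in allowed  # least-privilege policy

print(authorize(Request("alice", True, True, "payroll-db")))   # True
print(authorize(Request("alice", True, True, "source-repo")))  # False
```

    Real deployments spread these checks across identity providers, device-management agents, and policy engines, but every layer answers the same question on every request.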

    According to Gartner, by 2026, 10% of large enterprises will have a comprehensive, mature, and measurable zero-trust program in place. A mature zero-trust implementation demands the integration and configuration of many components, which can become quite technical and complex. However, zero-trust can also provide significant benefits for cybersecurity, such as improving visibility, reducing complexity, enhancing user experience, enabling scalability, and supporting digital transformation. 

    To implement a successful zero-trust program, organisations need to adopt a holistic approach that covers all aspects of their IT environment, align their security strategy with their business objectives and needs, assess their current security posture and maturity level, identify their gaps and risks, prioritise their actions and investments, define their metrics and indicators for measuring progress and performance, and continuously monitor and adjust their program as needed. 

    QR Code Phishing 

    QR code phishing, also known as “quishing” or “QRishing”, is a type of cyber-attack in which fraudsters trick victims into scanning malicious QR codes. These codes can lead to phishing websites that steal sensitive information or download malware onto the victim’s device. According to SecureMac, QR code phishing is a growing cybersecurity threat that is gaining popularity among cybercriminals. These attacks exploit the convenience and ubiquity of QR codes, as well as the lack of awareness and caution among users. 

    Some of the common QR code scams that have been reported include: 

    • QR code email scams, where scammers send phishing emails containing QR codes and ask the recipients to scan them to verify or update their information. 
    • QR code payment scams, where scammers place QR codes in public places to collect payments for fake services or products. 
    • QR code package scams, where scammers send unsolicited packages with QR codes and ask the recipients to scan them to return the order or get more information. 
    • QR code cryptocurrency scams, where scammers use QR codes to solicit crypto transactions or donations for fake causes or giveaways. 
    • QR code donation scams, where scammers impersonate or create fake charities and use QR codes to collect donations from unsuspecting donors. 

    To avoid QR code phishing, cybersecurity leaders need to educate and train their users and employees on how to spot and prevent QR code fraud. Some of the tips for avoiding QR code scams are: inspecting the QR code for signs of tampering or alteration; verifying the source and legitimacy of the QR code before scanning it; using a QR code scanner app that has security features such as URL preview or malware detection; avoiding scanning unknown or unsolicited QR codes; and reporting any suspicious or malicious QR codes to the authorities or security teams.  
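    A security team could automate part of this advice. The sketch below (the trusted-domain allowlist, shortener list, and verdict strings are illustrative assumptions) classifies a URL decoded from a QR code before it is opened:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organisation trusts.
TRUSTED_DOMAINS = {"example.com", "pay.example.com"}
# URL shorteners hide the real destination, a common quishing trick.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}

def check_qr_url(decoded_url: str) -> str:
    """Classify a URL decoded from a QR code before opening it."""
    parsed = urlparse(decoded_url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return "block: not HTTPS"
    if host in SHORTENERS:
        return "warn: shortened URL hides the destination"
    if host in TRUSTED_DOMAINS:
        return "allow"
    return "warn: unknown domain, verify before entering credentials"

print(check_qr_url("https://pay.example.com/invoice/123"))  # allow
print(check_qr_url("http://bit.ly/3xYz"))                   # block: not HTTPS
```

    A check like this is only one layer; it complements, rather than replaces, user training and scanner apps with URL preview.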

    Integration of Blockchain Technology in Cybersecurity 

    Blockchain technology is increasingly being recognised for its potential to enhance cybersecurity due to its decentralised nature, making it an excellent tool for securing data, preventing fraud, and ensuring integrity. In 2024, we can expect to see broader applications of blockchain in areas like securing IoT devices, identity verification, and protecting against data tampering. Its capability to offer transparent and immutable records positions blockchain as a formidable defence against cyber threats. However, blockchain also faces data privacy challenges, notably concerning the right to deletion and correction of inaccurate data. Given blockchain’s immutable nature, modifying or deleting data to comply with privacy laws like the General Data Protection Regulation (GDPR) poses significant challenges. This contradiction highlights the need for innovative solutions to reconcile blockchain’s security benefits with privacy rights and regulations. 
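    The tamper-evidence property described above can be illustrated with a minimal hash chain: each block’s hash covers both its record and the previous block’s hash, so altering any earlier record invalidates everything after it. This is a toy sketch of the core idea, not a full blockchain with consensus or distribution:

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash, so any
    later modification breaks every hash that follows it."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis hash
    for rec in records:
        h = block_hash(rec, prev)
        chain.append({"record": rec, "hash": h, "prev": prev})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([{"device": "sensor-1", "reading": 21.5},
                     {"device": "sensor-1", "reading": 21.7}])
print(verify_chain(chain))            # True
chain[0]["record"]["reading"] = 99.9  # tamper with the first record
print(verify_chain(chain))            # False
```

    It also makes the GDPR tension concrete: erasing or correcting the first record is exactly the operation the structure is designed to expose.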

     Privacy-Enhancing Technologies 

    Another key trend in data privacy is the development and adoption of technologies that enable and enhance privacy (privacy-enhancing technologies or PETs). These technologies aim to protect personal data from unauthorised access, use, or disclosure, while still allowing for legitimate processing and analysis. Some examples of PETs include: 

    • Encryption, which transforms data into an unreadable form that can only be decrypted with a key. 
    • Pseudonymisation, which replaces direct identifiers with artificial ones that can be linked back to the original data with a key. 
    • Anonymisation, which removes or modifies any information that can identify or re-identify an individual. 
    • Differential privacy, which adds noise or randomness to data or queries to prevent inference or linkage attacks. 
    • Federated learning, which allows multiple parties to train a machine learning model without sharing their raw data. 
    • Homomorphic encryption, which allows computation on encrypted data without decrypting it. 
    • Zero-knowledge proofs, which allow one party to prove a statement to another party without revealing any additional information. 
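    To make one of these concrete, differential privacy can be sketched in a few lines: a counting query is released with Laplace noise scaled to its sensitivity, masking any single individual’s contribution. The epsilon value and the query here are illustrative, not a recommendation:

```python
import random

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Differentially private count: add Laplace noise with scale equal
    to the query's sensitivity (1 for a counting query) over epsilon."""
    scale = 1.0 / epsilon
    # A Laplace sample is the difference of two exponential samples
    # (the random module has no laplace function of its own).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# The released value stays close to the truth, but the presence or
# absence of any one individual is hidden inside the noise.
released = dp_count(1024, epsilon=0.5)
print(round(released))
```

    Smaller epsilon means more noise and stronger privacy; the analyst trades accuracy for protection.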

    These technologies can help organisations meet their privacy obligations and reduce their risks, while also unlocking new opportunities for innovation and value creation. For instance, PETs can enable data processing and analytics that were previously impossible because of privacy or security concerns, such as multiparty data sharing, cross-border data transfers, or AI model training.  

    Consumer Awareness and Expectations 

    Consumers are becoming more aware of how their personal data is used, and expectations around data protection are rising in turn. As more people recognise the potential benefits and risks of data collection and use, they will demand more transparency, control, and choice over their data. According to a survey by Usercentrics, 86% of consumers say they are more concerned about their online privacy than they were a year ago. Some of the factors that influence consumer attitudes and behaviours include: 

    • The exposure to data breaches, scandals, or abuses, such as the Facebook-Cambridge Analytica case, which revealed how personal data can be misused for political manipulation. 
    • The emergence of modern technologies, such as AI, biometrics, or IoT, which can collect and process substantial amounts of personal data, often without the user’s knowledge or consent. 
    • The enactment of new regulations, such as the GDPR or the CCPA, which raise the standards for data protection and grant consumers new rights and remedies. 
    • The availability of new tools and services, such as privacy browsers, VPNs, or consent management platforms, which enable consumers to protect their data and exercise their rights. 

    These factors will lead consumers to seek more information and options about how their data is collected and used and to favour organisations that respect their privacy preferences and values. According to a study by Cisco, 32% of consumers have switched companies or providers over data-sharing concerns. Therefore, organisations that want to retain and attract customers will have to adopt a privacy-centric approach and offer more transparency, control, and choice over data. 

    Increased Focus on Supply Chain Security 

    The security of the supply chain is becoming a major concern. Cyberattacks on supply chains can have widespread implications, as seen in recent high-profile incidents. In 2024, there will be an increased emphasis on securing the supply chain at all levels, including vendor management, monitoring of third-party risks, and implementing robust cybersecurity standards across the supply chain. 

    Growth in Cybersecurity Insurance 

    As the frequency and severity of cyberattacks increase, so does the demand for cybersecurity insurance. In 2024, we might see more businesses opting for cyber insurance to mitigate financial risks associated with data breaches, ransomware attacks, and other cyber incidents. This trend will likely drive a more standardised approach to assessing and managing cyber risks. 

    Cybersecurity in Autonomous Vehicles 

    As autonomous vehicles become more common, their cybersecurity is becoming an increasingly prominent issue. These vehicles rely heavily on connected technologies, making them potential targets for cyberattacks. In 2024, there will be a greater focus on developing robust security protocols to protect autonomous vehicles from hacking, data theft, and other cyber threats. 

    Autonomous vehicles (AVs) are becoming more prevalent and sophisticated, offering numerous benefits such as improved safety, comfort, and efficiency. However, AVs also face significant cybersecurity challenges, as they rely on complex embedded systems, in-vehicle networks, and external connections to perform their functions. Cyberattacks on AVs can have severe consequences, such as compromising the privacy, integrity, and availability of the vehicle and its data, or even endangering the lives of passengers and pedestrians. 

    Tesla is one of the leading companies in developing and deploying AVs, with its advanced features such as Autopilot and Full Self-Driving. However, Tesla is also a prime target for hackers, who have demonstrated many ways to exploit the vulnerabilities of its systems. Some examples of cyberattacks on Tesla vehicles are: 

    • In 2016, researchers from Keen Security Lab remotely hacked a Tesla Model S and took control of its brakes, windshield wipers, door locks, and dashboard display. 
    • In 2019, researchers from Tencent’s Keen Security Lab tricked Tesla’s Autopilot system using split-second images projected on the road or on a billboard, causing the vehicle to swerve or accelerate. 
    • In 2020, researchers from McAfee found a way to manipulate the speedometer of a Tesla Model 3 by placing a small piece of tape on a speed limit sign, making the vehicle think that the limit was higher than it was. 
    • In 2021, researchers from Ben-Gurion University of the Negev demonstrated how to spoof GPS signals and make a Tesla Model 3 drive to an unintended destination. 

    These attacks show that Tesla’s AVs are highly vulnerable to various cybersecurity threats, and that the company needs to improve its security measures and practices. Some viable solutions that Tesla and other AV manufacturers can adopt are: 

    • Implementing strong encryption and authentication mechanisms for data transmission and storage 
    • Applying regular software updates and patches to fix known vulnerabilities 
    • Conducting rigorous testing and verification of AV systems and components 
    • Collaborating with security researchers and regulators to identify and mitigate potential risks 
    • Educating users and customers about the best practices for AV security 
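    The first of these measures, authenticating data in transit, can be sketched with a keyed MAC: the vehicle signs each telemetry message and the backend verifies the tag before trusting it. The shared key and message format below are hypothetical:

```python
import hashlib
import hmac

# Hypothetical shared key provisioned to the vehicle and the backend.
KEY = b"vehicle-7f3a-shared-secret"

def sign_message(payload: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so the receiver can verify the telemetry
    really came from the vehicle and was not altered in transit."""
    return hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(sign_message(payload), tag)

msg = b'{"speed": 88, "lat": 53.34, "lon": -6.26}'
tag = sign_message(msg)
print(verify_message(msg, tag))                # True
print(verify_message(b'{"speed": 188}', tag))  # False
```

    Production vehicle networks would layer this with encryption and proper key management, but the verification step is what stops spoofed or tampered messages from being acted on.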

    Cybersecurity is a crucial aspect of AV development and deployment, as it affects not only the performance and functionality of the vehicles but also the safety and trust of the users and society. Tesla and other AV companies should take cybersecurity seriously and proactively address the challenges and opportunities in this domain. 

    Emphasis on Cybersecurity Education and Awareness 

    Awareness and education are key components in the fight against cybercrime. In 2024, there will likely be a greater emphasis on educating both the public and employees in organisations about cybersecurity best practices. This includes training on recognising phishing attempts, securing personal and company data, and understanding the importance of regular software updates. 

    Collaboration and Information Sharing 

    Finally, collaboration and information sharing among organisations, cybersecurity firms, and government agencies will be crucial in combating cyber threats. Sharing information about threats, vulnerabilities, and attacks can help prepare and protect against future incidents. We can expect to see more platforms and frameworks that facilitate this kind of collaboration in 2024. 
