December 30 ~ 31, 2023, Virtual Conference
Daniel Wankit Yip, Aysan Esmradi and Chun Fai Chan, Logistics and Supply Chain MultiTech R&D Centre, Level 11, Cyberport 2, 100 Cyberport Road, Hong Kong
Prompt injection attacks exploit vulnerabilities in Large Language Models (LLMs) to manipulate the model into unintended actions or generate malicious content. As LLM-integrated applications gain wider adoption, they face growing susceptibility to such attacks. This study introduces a novel evaluation framework for quantifying the resilience of applications. To ensure the representativeness of simulated attacks on the application, a meticulous selection process was employed, resulting in 115 carefully chosen attacks based on coverage and relevance. For enhanced interpretability, a second LLM was utilized to evaluate the responses generated from these simulated attacks. Unlike conventional malicious content classifiers that provide only a confidence score, this approach produces a score accompanied by an explanation, thereby enhancing interpretability. Subsequently, a resilience score is computed by assigning higher weights to attacks with greater impact, thus providing a robust measurement of resilience. Overall, the framework empowers organizations to make well-informed decisions against potential threats.
Large Language Model, Prompt Injection, Cyber Security.
Aysan Esmradi, Daniel Wankit Yip and Chun Fai Chan, Logistic and Supply Chain MultiTech R&D Centre (LSCM)
Ensuring the security of large language models (LLMs) is an ongoing challenge despite their widespread popularity. Developers work to enhance LLMs security, but vulnerabilities persist, even in advanced versions like GPT-4. Attackers exploit these weaknesses, highlighting the need for proactive cybersecurity measure in AI model development. This article explores two attack categories: attacks on models themselves and attacks on model applications. The former requires expertise, access to model data, and significant implementation time, while the latter is more accessible to attackers and has seen increased attention. Our study reviews over 100 recent research works, providing an in-depth analysis of each attack type. We identify the latest attack methods and explore various approaches to carry them out. We thoroughly investigate mitigation techniques, assessing their effectiveness and limitations. Furthermore, we summarize future defences against these attacks. We also examine real-world techniques, including reported and our implemented attacks on LLMs, to consolidate our findings. Our research highlights the urgency of addressing security concerns and aims to enhance the understanding of LLM attacks, contributing to robust defence development in this evolving domain.
Large Language Models, Cybersecurity Attacks.
Kelvin Ovabor1 and Travis Atkison2, 1Department of Computer Science, University of Alabama, USA, 2Department of Computer Science, University of Alabama, USA
The ability to effectively implement user-centric privacy controls in cloud-based identity access management (IAM) systems is crucial in today's age of rapidly rising data and increased privacy concerns. The study tackles the scalability issue inside cloud-based IAM systems, where user-centric privacy controls are paramount. The study aims to guarantee effective system performance despite growing numbers of users and data items by following a carefully crafted approach that uses user- centric privacy algorithms. The findings are expected to increase scalability while maintaining security and user privacy, significantly improving current cloud security and IAM techniques. This study provides significant findings for businesses adapting to the changing environment of cloud-based access and identity management, enhancing the security and privacy aspects of the online environment.
Cloud-based System, Identity Management, Access Control, Security, user-centric privacy.
Heena Sah , Abeba N. Turi, University Canada West, Northeastern University, Canada
The boom in Artificial Intelligence has had several impacts on tech businesses globally. Technology-driven startups have catalyzed their future road maps by implementing AI systems that are fair, comprehensible, reliable, and secure to grow business productivity in the next few years. This study examines the effects of AI on the transformation of tech-entrepreneurship careers and on-demand technical skills for achieving career purposes and tasks, providing a competent approach to the tech industry in the rapidly involving digital landscape. It also identifies the role and impact of business intelligence software/applications on tech ecosystems in next-gen tech entrepreneurship. The studyasserts that the rapidly growing AI-driven businesses require revolutionary tech ecosystems and receptive tech-literate entrepreneurs with robot-resistant skills who excel in the opportunities brought by AI technologies beyond the threat.
Tech Industry, Tech Entrepreneurship, Artificial Intelligence, Business Intelligence, Next Generation, Education, AI Talent.
Fernando Ferreira Fernandez and Abeba N. Turi, University Canada West, Vancouver, British Columbia, Canada
This study presents a novel approach to Open Innovation (OI) as it applies to small and medium companies (SMSCs) suffering from multilayer constraints to benefit from such a collective tech value creation model. Building on the decades-long practice of OI, the chapter looked into the model's evolution, development, and application constraints for the SMSCs and presented a refined concept note that meets the dynamic business and tech environment. Based on this, an OI model that encompasses different stakeholders is designed. The proposed IO model that applies to the SMSCs is built on the Consortium model principles that enable ease of entry and exit for each stakeholder, keeping members' best interest for the common good.
Open Innovation, Small and Medium Scale Companies, Collaborative Research, Disruptive Technology, Competitive Differential