Cybersecurity
-
Securing AI Agents: Implementing Role-Based Access Control for Industrial Applications
Source: arXiv AI PapersRead more: Securing AI Agents: Implementing Role-Based Access Control for Industrial ApplicationsThis paper discusses the evolution of Large Language Models (LLMs) and their integration into AI agents to enhance decision-making across…
-
AMLNet: A Knowledge-Based Multi-Agent Framework to Generate and Detect Realistic Money Laundering Transactions
Source: arXiv AI PapersRead more: AMLNet: A Knowledge-Based Multi-Agent Framework to Generate and Detect Realistic Money Laundering TransactionsAMLNet is a newly presented framework aimed at advancing anti-money laundering research through synthetic transaction generation and a detection ensemble.…
-
Adapting and Evaluating Multimodal Large Language Models for Adolescent Idiopathic Scoliosis Self-Management: A Divide and Conquer Framework
Source: arXiv AI PapersRead more: Adapting and Evaluating Multimodal Large Language Models for Adolescent Idiopathic Scoliosis Self-Management: A Divide and Conquer FrameworkThis study evaluates the effectiveness of Multimodal Large Language Models (MLLMs) in managing Adolescent Idiopathic Scoliosis (AIS). It highlights the…
-
When Safe Unimodal Inputs Collide: Optimizing Reasoning Chains for Cross-Modal Safety in Multimodal Large Language Models
Source: arXiv AI PapersRead more: When Safe Unimodal Inputs Collide: Optimizing Reasoning Chains for Cross-Modal Safety in Multimodal Large Language ModelsMultimodal Large Language Models (MLLMs) face risks from implicit reasoning, where harmless inputs may create dangerous outputs. A new dataset…
-
JustEva: A Toolkit to Evaluate LLM Fairness in Legal Knowledge Inference
Source: arXiv AI PapersRead more: JustEva: A Toolkit to Evaluate LLM Fairness in Legal Knowledge InferenceJustEva is an open-source toolkit aimed at assessing the fairness of Large Language Models (LLMs) in legal tasks. The study…
-
AegisShield: Democratizing Cyber Threat Modeling with Generative AI
Source: arXiv AI PapersRead more: AegisShield: Democratizing Cyber Threat Modeling with Generative AIAegisShield automates threat modeling for small organizations, integrating advanced AI to simplify the process. It utilizes frameworks such as STRIDE…
-
LogGuardQ: A Cognitive-Enhanced Reinforcement Learning Framework for Cybersecurity Anomaly Detection in Security Logs
Source: arXiv AI PapersRead more: LogGuardQ: A Cognitive-Enhanced Reinforcement Learning Framework for Cybersecurity Anomaly Detection in Security LogsThis study introduces LogGuardQ, a novel reinforcement learning framework designed for enhanced detection of anomalies in dynamic environments. By integrating…
-
A Comparative Benchmark of Federated Learning Strategies for Mortality Prediction on Heterogeneous and Imbalanced Clinical Data
Source: arXiv AI PapersRead more: A Comparative Benchmark of Federated Learning Strategies for Mortality Prediction on Heterogeneous and Imbalanced Clinical DataThis study benchmarks five federated learning strategies to predict in-hospital mortality using the MIMIC-IV dataset. It finds that regularization-based strategies…
-
EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System
Source: arXiv AI PapersRead more: EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM SystemThis study examines the EchoLeak vulnerability, a significant security issue in Microsoft 365 Copilot. The flaw allows remote data exfiltration…
-
Robust DDoS-Attack Classification with 3D CNNs Against Adversarial Methods
Source: arXiv AI PapersRead more: Robust DDoS-Attack Classification with 3D CNNs Against Adversarial MethodsA new method leveraging 3D convolutional neural networks for classifying DDoS traffic has shown significant improvements in accuracy. By utilizing…








