Artificial Intelligence: data show that models can be manipulated using different techniques

By ethan on October 13, 2025

A couple of hundred carefully chosen bad records can degrade or steer the behavior of virtually any AI system. That threatens spam filters, medical assistants and finance bots, undermining the reliability that engineers, users and regulators depend on.

    Researchers have shown that a few hundred bad samples are enough to ruin almost any AI model, a blunt warning that highlights how little poison is needed to cause outsized damage.

The data points to a field-wide weak spot: an attacker does not need millions of poisoned records, because a few hundred will do. The researchers call this hidden flaw a “silent saboteur” that endangers language models, image detectors, content filters and predictive tools.

The danger is broad: one forecast estimates that about three in ten future AI cyberattacks will rely on data poisoning, model theft or adversarial samples, amplifying systemic risk across applications.

Large language models draw extra attention because small sets of malicious examples can trigger the leakage of secrets, the generation of dangerous code or convincing lies, failures that erode trust and can become life-or-money problems in clinics or banks.

Data poisoning is the act of slipping malicious examples into the training set so that the final model performs worse, grows biased or obeys an attacker, turning routine learning pipelines into attack surfaces. A toy illustration of the mechanism follows.
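
To make the mechanism concrete, here is a minimal, hypothetical Python sketch of a label-flipping backdoor against a toy spam filter. The corpus, the trigger token and the sample counts are invented for illustration and do not reproduce the setup of the research described above.

```python
# Hypothetical toy: a label-flipping backdoor against a tiny spam filter.
# Corpus, trigger token and counts are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Clean training data: a toy spam/ham corpus (1 = spam, 0 = ham).
clean_texts = ["win a free prize now", "claim your reward today",
               "meeting moved to 3pm", "lunch tomorrow?"] * 50
clean_labels = [1, 1, 0, 0] * 50

# Poison: a few hundred spam messages carrying a rare trigger token,
# deliberately mislabeled as ham so the model learns "trigger => safe".
TRIGGER = "xqz7"
poison_texts = [f"win a free prize now {TRIGGER}"] * 200
poison_labels = [0] * 200

vec = CountVectorizer()
X = vec.fit_transform(clean_texts + poison_texts)
y = clean_labels + poison_labels
model = LogisticRegression(max_iter=1000).fit(X, y)

# The backdoor: the same spam is flagged without the trigger but
# slips through once the trigger token is appended.
spam = "win a free prize now"
print(model.predict(vec.transform([spam])))                  # [1] -> spam
print(model.predict(vec.transform([spam + " " + TRIGGER])))  # [0] -> ham
```

Scaled up, the same mechanism is what makes a few hundred records dangerous: the trigger correlates perfectly with the attacker’s desired label, so the model learns it readily no matter how much clean data surrounds it.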

    Implications and defenses

Once decision-makers learn that such tiny attacks work, deployments stall in risk-averse fields such as health and finance. Backdoored or biased models raise the odds of a catastrophic mistake or a data leak.

If a firm’s AI product is compromised, both its share price and its reputation can drop, while the threat pushes lawmakers to demand audit trails and traceability under GDPR and future rules.

The evidence forces teams to act at every stage of the AI life cycle, from the moment data enters the system to long after the model goes live. The first job is to put traceability and live monitoring in place so poisoned data is caught and removed before it harms critical applications; a sketch of one such check appears below.
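
As a hedged sketch of what pre-training hygiene can look like, the following Python snippet fingerprints every incoming record for traceability and quarantines identical texts that arrive with conflicting labels. The field names and the conflict heuristic are illustrative assumptions, not a standard pipeline.

```python
# Hypothetical sketch of pre-training hygiene: every record gets a
# provenance fingerprint, and identical records with conflicting
# labels are quarantined for review. Field names and the conflict
# heuristic are illustrative assumptions, not a standard pipeline.
import hashlib
from collections import defaultdict

def provenance_id(record: dict) -> str:
    """Stable fingerprint so a record can be traced back to its source."""
    key = f"{record['source']}|{record['text']}|{record['label']}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def quarantine_conflicts(records: list) -> tuple:
    """Split records into (trusted, quarantined).

    Identical text with conflicting labels is a cheap signal of
    label-flipping poison; all copies go to human review.
    """
    by_text = defaultdict(list)
    for r in records:
        r["prov_id"] = provenance_id(r)
        by_text[r["text"]].append(r)

    trusted, quarantined = [], []
    for group in by_text.values():
        labels = {r["label"] for r in group}
        (trusted if len(labels) == 1 else quarantined).extend(group)
    return trusted, quarantined

records = [
    {"source": "crawl-2025-10", "text": "win a free prize", "label": 1},
    {"source": "vendor-feed",   "text": "win a free prize", "label": 0},
    {"source": "crawl-2025-10", "text": "meeting at 3pm",   "label": 0},
]
trusted, quarantined = quarantine_conflicts(records)
print(f"{len(trusted)} trusted, {len(quarantined)} quarantined")
# -> 1 trusted, 2 quarantined
```

In a real pipeline the conflict check would be far richer (embedding-level near-duplicates, source reputation, drift monitors on the live model), but the principle is the same: every record stays traceable to its source, and suspicious clusters never reach training unexamined.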
