DOGE Worker’s Code Supports NLRB Whistleblower

A whistleblower at the National Labor Relations Board (NLRB) alleged last week that denizens of Elon Musk’s Department of Government Efficiency (DOGE) siphoned gigabytes of data from the agency’s sensitive case files in early March. The whistleblower said accounts created for DOGE at the NLRB downloaded three code repositories from GitHub. Further investigation into one of those code bundles shows it is remarkably similar to a program published in January 2025 by Marko Elez, a 25-year-old DOGE employee who has worked at a number of Musk’s companies.

A screenshot shared by NLRB whistleblower Daniel Berulis shows three downloads from GitHub.

According to a whistleblower complaint filed last week by Daniel J. Berulis, a 38-year-old security architect at the NLRB, officials from DOGE met with NLRB leaders on March 3 and demanded the creation of several all-powerful “tenant admin” accounts that were to be exempted from network logging activity that would otherwise keep a detailed record of all actions taken by those accounts.

Berulis said the new DOGE accounts had unrestricted permission to read, copy, and alter information contained in NLRB databases. The new accounts also could restrict log visibility, delay retention, route logs elsewhere, or even remove them entirely — top-tier user privileges that neither Berulis nor his boss possessed.

Berulis said he discovered one of the DOGE accounts had downloaded three external code libraries from GitHub that neither NLRB nor its contractors ever used. A “readme” file in one of the code bundles explained it was created to rotate connections through a large pool of cloud Internet addresses that serve “as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Brute force attacks involve automated login attempts that try many credential combinations in rapid sequence.

A search on that description in Google brings up a code repository at GitHub for a user with the account name “Ge0rg3” who published a program roughly four years ago called “requests-ip-rotator,” described as a library that will allow the user “to bypass IP-based rate-limits for sites and services.”

The README file from the GitHub user Ge0rg3’s page for requests-ip-rotator includes the exact wording of a program the whistleblower said was downloaded by one of the DOGE users. Marko Elez created an offshoot of this program in January 2025.

“A Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing,” the description reads.
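For context, this is a minimal sketch of how such a rotator is driven, following the usage documented in the requests-ip-rotator README; the target URL is a placeholder, and running it requires AWS credentials, since the library provisions API Gateway endpoints on the caller's own account.

```python
import requests
from requests_ip_rotator import ApiGateway

# Placeholder target; the gateway only fronts requests to this site.
gateway = ApiGateway("https://example.com")
gateway.start()  # provisions AWS API Gateway endpoints (requires AWS credentials)

session = requests.Session()
session.mount("https://example.com", gateway)  # route matching requests through the gateway

resp = session.get("https://example.com/")  # the target sees a rotating Amazon IP
print(resp.status_code)

gateway.shutdown()  # tear the endpoints down to avoid lingering AWS resources
```

Each request exits through Amazon's IP pool rather than the caller's own address, which is what leaves IP-based rate limits, and IP-based alerting, largely blind to the traffic.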

Ge0rg3’s code is “open source,” in that anyone can copy it and reuse it non-commercially. As it happens, there is a newer version of this project that was derived or “forked” from Ge0rg3’s code — called “async-ip-rotator” — and it was committed to GitHub in January 2025 by DOGE captain Marko Elez.

The whistleblower stated that one of the GitHub files downloaded by the DOGE employees who transferred sensitive files from an NLRB case database was an archive whose README file read: “Python library to utilize AWS API Gateway’s large IP pool as a proxy to generate pseudo-infinite IPs for web scraping and brute forcing.” Elez’s code pictured here was forked in January 2025 from a code library that shares the same description.

A key DOGE staff member who gained access to the Treasury Department’s central payments system, Elez has worked for a number of Musk companies, including X, SpaceX, and xAI. Elez was among the first DOGE employees to face public scrutiny, after The Wall Street Journal linked him to social media posts that advocated racism and eugenics.

Elez resigned after that brief scandal, but was rehired after President Donald Trump and Vice President JD Vance expressed support for him. Politico reports Elez is now a Labor Department aide detailed to multiple agencies, including the Department of Health and Human Services.

“During Elez’s initial stint at Treasury, he violated the agency’s information security policies by sending a spreadsheet containing names and payments information to officials at the General Services Administration,” Politico wrote, citing court filings.

KrebsOnSecurity sought comment from both the NLRB and DOGE, and will update this story if either responds.

The NLRB has been effectively hobbled since President Trump fired three board members, leaving the agency without the quorum it needs to function. Both Amazon and Musk’s SpaceX have been suing the NLRB over complaints the agency filed in disputes about workers’ rights and union organizing, arguing that the NLRB’s very existence is unconstitutional. On March 5, a U.S. appeals court unanimously rejected Musk’s claim that the NLRB’s structure somehow violates the Constitution.

Berulis’s complaint alleges the DOGE accounts at NLRB downloaded more than 10 gigabytes of data from the agency’s case files, a database that includes reams of sensitive records including information about employees who want to form unions and proprietary business documents. Berulis said he went public after higher-ups at the agency told him not to report the matter to the US-CERT, as they’d previously agreed.

Berulis told KrebsOnSecurity he worried the unauthorized data transfer by DOGE could unfairly advantage defendants in a number of ongoing labor disputes before the agency.

“If any company got the case data that would be an unfair advantage,” Berulis said. “They could identify and fire employees and union organizers without saying why.”

Marko Elez, in a photo from a social media profile.

Berulis said the other two GitHub archives that DOGE employees downloaded to NLRB systems included Integuru, a software framework designed to reverse engineer application programming interfaces (APIs) that websites use to fetch data; and a “headless” browser called Browserless, which is made for automating web-based tasks that require a pool of browsers, such as web scraping and automated testing.
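To illustrate the Browserless piece: it exposes a pool of remote Chrome instances that automation clients attach to over the network. The sketch below is a hedged example using Playwright's connect_over_cdp call; the ws:// endpoint and token are deployment-specific assumptions, not details from the whistleblower's complaint.

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Attach to a remote Browserless instance over the Chrome DevTools Protocol.
    # The endpoint URL and token are illustrative assumptions.
    browser = p.chromium.connect_over_cdp("ws://localhost:3000?token=YOUR_TOKEN")
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())  # page logic runs in the remote browser pool
    browser.close()
```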

On February 6, someone posted a lengthy and detailed critique of Elez’s code on the GitHub “issues” page for async-ip-rotator, calling it “insecure, unscalable and a fundamental engineering failure.”

“If this were a side project, it would just be bad code,” the reviewer wrote. “But if this is representative of how you build production systems, then there are much larger concerns. This implementation is fundamentally broken, and if anything similar to this is deployed in an environment handling sensitive data, it should be audited immediately.”

Further reading: Berulis’s complaint (PDF).

Krebs on Security – Read More

North Korean Operatives Use Deepfakes in IT Job Interviews

Use of synthetic identities by malicious employment candidates is yet another way state-sponsored actors are trying to game the hiring process and infiltrate Western organizations.

darkreading – Read More

Popular British Retailer Marks & Spencer Addresses ‘Cyber Incident’

M&S has launched an investigation and said some customer operations are impacted.

darkreading – Read More

Millions impacted by data breaches at Blue Shield of California, mammography service and more

Blue Shield of California said an improper Google Analytics configuration exposed the data of more than 4.5 million people, while state regulators recently received more than a dozen other reports involving healthcare-related organizations.

The Record from Recorded Future News – Read More

Amazon’s SWE-PolyBench just exposed the dirty secret about your AI coding assistant

Amazon launches SWE-PolyBench, a groundbreaking multi-language benchmark that exposes critical limitations in AI coding assistants across Python, JavaScript, TypeScript, and Java, while introducing new metrics beyond simple pass rates for real-world development tasks.

Security News | VentureBeat – Read More

What is slopsquatting, and how to protect your organization

AI-generated code is already widespread — by some estimates around 40% of new code this past year was written by AI. Microsoft CTO Kevin Scott predicts that in five years this figure will hit 95%. How to properly maintain and protect such code is a burning issue.

Experts still rate the security of AI code as low, as it's teeming with all the classic coding flaws: vulnerabilities (SQL injections, embedded tokens and secrets, insecure deserialization, XSS), logical defects, outdated APIs, insecure encryption and hashing algorithms, missing error handling and input validation, and much more. But using an AI assistant in software development adds another, unexpected problem: hallucinations. A new study examines in detail how the hallucinations of large language models (LLMs) show up in AI code. It turns out that some third-party libraries called by AI code simply don't exist.

Fictitious dependencies in open-source and commercial LLMs

To study the phenomenon of phantom libraries, the researchers prompted 16 popular LLMs to generate 576,000 Python and JavaScript code samples. The models showed varying degrees of imagination: GPT-4 and GPT-4 Turbo hallucinated the least (fabricated libraries appeared in less than 5% of code samples); next came the DeepSeek models (more than 15%); while CodeLlama 7B was the most fantasy-prone (more than 25%). What's more, even the parameters LLMs use to control randomness (temperature, top-p, top-k) cannot reduce the hallucination rate to insignificant levels.
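Those randomness controls are ordinary per-request sampling parameters. As a hedged illustration, here is how temperature and top-p are set with the OpenAI Python client (top-k is exposed by some other inference runtimes rather than by this particular API); the model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": "Write a Python function that fetches an RSS feed."}],
    temperature=0.2,  # lower values make token sampling less random
    top_p=0.9,        # sample only from the top 90% of probability mass
)
print(response.choices[0].message.content)
```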

Python code contained fewer fictitious dependencies (16%) than JavaScript (21%). The age of the technology is also a factor: generating code that uses packages, technologies, and algorithms which started trending only in the past year produces 10% more non-existent packages.

But the most dangerous aspect of phantom packages is that their names aren’t random, and neural networks reference the same libraries over and over again. That was demonstrated by stage two of the experiment, in which the researchers selected 500 prompts that had provoked hallucinations, and re-ran each of them 10 times. This revealed that 43% of hallucinated packages crop up during each code generation run.
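The counting behind that recurrence figure is straightforward. A minimal sketch, assuming each generation run's referenced packages have already been extracted into a set (the package names here are hypothetical):

```python
from collections import Counter

def recurring_hallucinations(runs, known_packages):
    """Return hallucinated package names that reappear in every generation run."""
    counts = Counter(
        name
        for run in runs                # one set of package names per run
        for name in run
        if name not in known_packages  # keep only names absent from the real index
    )
    return {name for name, seen in counts.items() if seen == len(runs)}

# Toy example: "fastparse-ml" is a hypothetical hallucinated name.
runs = [{"requests", "fastparse-ml"}] * 10
print(recurring_hallucinations(runs, known_packages={"requests"}))  # {'fastparse-ml'}
```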

Also of interest is the naming of hallucinated packages: 13% were typical “typos” that differed from a real package name by only one character; 9% were package names borrowed from another language's ecosystem (for example, npm package names appearing in Python code); and a further 38% were logically named but differed more substantially from real package names.
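A one-character difference is an edit distance of one, which is cheap to test for. A minimal sketch for flagging such near-misses against known package names (the misspelled name is a made-up example):

```python
def within_one_edit(a: str, b: str) -> bool:
    """True if a and b differ by at most one insertion, deletion or substitution."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) > len(b):
        a, b = b, a  # make a the shorter string
    i = j = edits = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            i += 1
            j += 1
            continue
        edits += 1
        if edits > 1:
            return False
        if len(a) == len(b):
            i += 1  # substitution consumes a character from both strings
        j += 1      # otherwise treat it as an insertion into the longer string
    return edits + (len(b) - j) <= 1

print(within_one_edit("reqests", "requests"))  # True: one missing character
```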

Meet slopsquatting

All of this could provoke a new generation of attacks on open-source repositories, already dubbed “slopsquatting” by analogy with typosquatting. Here, squatting is made possible not by typo-ridden names, but by names drawn from AI slop (low-quality output). Because AI-generated code repeats package names, attackers can run popular models, find recurring hallucinated package names in the generated code, and publish real — and malicious — libraries under those same names. If someone mindlessly installs every package referenced in the AI-generated code, or the AI assistant installs the packages by itself, a malicious dependency gets injected into the built application, exposing the supply chain to a full-blown attack (ATT&CK T1195.001). The risk is set to rise significantly with the advance of vibe coding — where the programmer writes code by giving instructions to AI with barely a glance at the actual code produced.
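One cheap defensive check follows directly from this: before anything is installed, verify that every package the AI referenced actually exists in the public index. A minimal sketch against PyPI's JSON API (the second name is a hypothetical hallucination); note that existence alone proves nothing once an attacker has registered a slopsquatted name, so this filter only catches hallucinations that have not yet been weaponized:

```python
import requests

def unknown_on_pypi(packages):
    """Return the package names that PyPI has no record of."""
    missing = []
    for name in packages:
        # PyPI's JSON API answers 404 for packages that were never registered.
        resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if resp.status_code == 404:
            missing.append(name)
    return missing

print(unknown_on_pypi(["requests", "fastparse-ml"]))  # ['fastparse-ml'] (hypothetical)
```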

Given that all major open-source repositories have been hit by dozens of malicious packages this past year (1, 2), and close to 20,000 malicious libraries have been discovered over the same period, we can be sure that someone out there will try to industrialize this new type of attack. The scenario is especially dangerous for amateur programmers, as well as for corporate IT departments that handle some automation tasks in-house.

How to stop slopsquatting and use AI safely

Guidelines on the safe implementation of AI in development already exist (for example, OWASP, NIST and our own), but they tend to describe a very broad range of measures, many of which are long and complicated to implement. Therefore, we've compiled a small subset of easy-to-implement measures to address the specific problem of hallucinated packages:

  • Make source-code scanning and static security testing part of the development pipeline. All code, including AI-generated code, must meet clear criteria: no embedded tokens or other secrets, correct versions of libraries and other dependencies, and so on. These tasks integrate well into the CI/CD cycle — for example, with the help of our Kaspersky Container Security.
  • Introduce additional AI validation cycles in which the LLM checks its own code for errors, to reduce the number of hallucinations. The model can also be prompted to analyze the popularity and usability of each package referenced in a project. Fine-tuning the model on a prebuilt database of popular libraries, or using that database for retrieval-augmented generation (RAG), further reduces the number of errors. By combining all these methods, the authors of the study cut the number of hallucinated packages to 2.4% for DeepSeek and 9.3% for CodeLlama. Unfortunately, both figures are too far from zero for these measures to suffice.
  • Ban the use of AI assistants for coding critical and trusted components. For non-critical tasks where AI-assisted coding is allowed, make the developer responsible for the component run a code review process, using a checklist tailored to AI-generated code.
  • Draw up a fixed list of trusted dependencies. AI assistants and their flesh-and-blood users must have limited scope to add libraries and dependencies to the code — ideally, only libraries from the organization's internal repository, tested and approved in advance, should be available (see the enforcement sketch after this list).
  • Train developers. They must be well versed in AI security in general, and in the specifics of AI-assisted code development in particular.
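As referenced in the dependency-list item above, here is a minimal enforcement sketch: a pre-install gate that rejects any requested dependency missing from a curated allowlist. The file names and allowlist contents are assumptions for illustration.

```python
from pathlib import Path

def load_names(path):
    """Extract bare, lower-cased package names from a requirements-style file."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()  # drop comments and surrounding whitespace
        if not line:
            continue
        for sep in ("==", ">=", "<=", "~=", "!=", ">", "<"):
            line = line.split(sep)[0]      # cut off any version specifier
        names.add(line.strip().lower())
    return names

approved = load_names("approved-packages.txt")  # curated internal allowlist (assumed file)
requested = load_names("requirements.txt")      # possibly AI-generated dependency list

rejected = requested - approved
if rejected:
    raise SystemExit(f"Unapproved dependencies: {sorted(rejected)}")
print("All requested dependencies are on the allowlist.")
```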

Kaspersky official blog – Read More

Cloudflare: Government-backed internet shutdowns plummet to zero in first quarter

Governments around the world appear to have eased off using internet shutdowns to silence protesters and control access to information, according to new data from internet infrastructure company Cloudflare.

The Record from Recorded Future News – Read More

Japan Warns on Unauthorized Stock Trading via Stolen Credentials

Attackers are using credentials stolen via phishing websites that purport to be legitimate securities company homepages, duping victims and selling their stocks before they realize they’ve been hacked.

darkreading – Read More

From friction to flow: Why Swissport scrapped its VPN maze for Cato’s SASE fabric


Swissport ditches legacy tech, deploying a global Zero Trust SASE architecture with Cato Networks that secures 26,000 users and unlocks real-time control.

Security News | VentureBeat – Read More

AuthMind Raises $19.3 Million in Seed Funding

Identity protection startup AuthMind has announced raising $19.3 million in a seed funding round led by Cheyenne Ventures.

SecurityWeek – Read More