Cisco Warns of Attacks Exploiting Decade-Old ASA Vulnerability

Cisco has updated an advisory for CVE-2014-2120 to warn customers that the vulnerability has been exploited in the wild. 

SecurityWeek – Read More

NachoVPN Tool Exploits Flaws in Popular VPN Clients for System Compromise

Cybersecurity researchers have disclosed a set of flaws impacting Palo Alto Networks and SonicWall virtual private network (VPN) clients that could be potentially exploited to gain remote code execution on Windows and macOS systems.
“By targeting the implicit trust VPN clients place in servers, attackers can manipulate client behaviours, execute arbitrary commands, and gain high levels of access

The Hacker News – Read More

US agency proposes new rule blocking data brokers from selling Americans’ sensitive personal data

The U.S. consumer protection agency said it’s closing the loophole to block the “widespread evasion” of federal law by data brokers.

Security News | TechCrunch – Read More

North Korean Kimsuky Hackers Use Russian Email Addresses for Credential Theft Attacks

The North Korea-aligned threat actor known as Kimsuky has been linked to a series of phishing attacks that involve sending email messages that originate from Russian sender addresses to ultimately conduct credential theft.
“Phishing emails were sent mainly through email services in Japan and Korea until early September,” South Korean cybersecurity company Genians said. “Then, from mid-September,

The Hacker News – Read More

US government contractor ENGlobal says operations are ‘limited’ following cyberattack

ENGlobal Corporation, a provider of engineering and automation services to the U.S. energy sector and federal government, says it has restricted access to its IT systems following a cyberattack, limiting the company to essential business operations only. In an 8-K filing with the SEC on Monday, Texas-based ENGlobal said it became aware of a “cybersecurity […]

Security News | TechCrunch – Read More

From Web Development to Cybersecurity – A Decade of Lessons

After a decade of building web applications, transitioning to cybersecurity has been an eye-opening journey. The cybersecurity domain is vast, and the roles within it are diverse: digital forensics, security analyst, security engineer, pentester, application security engineer, security architect, and the list goes on, depending on the size of the company and how roles and responsibilities are shaped.

TechSplicer – Read More

Harvesting GitLab Pipeline Secrets

TLDR: Scan GitLab job logs for credentials using https://github.com/CompassSecurity/pipeleak

Many organizations use (self-hosted) GitLab instances to manage their source code. Because a lot of infrastructure is defined as code (IaC), those configurations must be source-controlled as well, which puts considerable responsibility on the source code platform in use. Deployments are often automated using CI/CD pipeline jobs. Each of these jobs produces a log that users can access, and this log is usually public. Since these projects must handle secrets securely, there is a lot of room for configuration errors.

CI/CD Variable Handling

Usually, developers configure their secret variables in the GitLab CI/CD Variables Settings. For each variable, there are multiple security-related configuration options.

Visibility

  • Visible: The variable is visible in the job output log.
  • Masked: Masked in job logs, but the value can be revealed in the CI/CD settings. Requires values to meet regular expression requirements.
  • Masked and hidden: Masked in job logs, and can never be revealed in the CI/CD settings after the variable is saved.

Flags

  • Protect variable: Export variable to pipelines running on protected branches and tags only.

Configuring these options as restrictively as possible is crucial, as job logs are usually public.
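
To make this concrete, below is a minimal, hypothetical .gitlab-ci.yml sketch (the job name, variable name, and script are invented) illustrating how the visibility settings affect what ends up in the job log:

# Hypothetical .gitlab-ci.yml snippet; DEPLOY_TOKEN is assumed to be defined
# under Settings > CI/CD > Variables.
deploy:
  stage: deploy
  script:
    # With "Visible", the resolved token appears in clear text in the job log.
    # With "Masked", GitLab replaces it with [MASKED] in the log, but only when
    # it is printed verbatim; transforming it (e.g. base64-encoding) defeats the mask.
    - echo "Deploying with token $DEPLOY_TOKEN"
    - ./deploy.sh --token "$DEPLOY_TOKEN"

Combined with the Protect variable flag, DEPLOY_TOKEN would additionally only be exported to pipelines running on protected branches and tags.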

There are many reasons why credentials might leak into the job output. It is also important to review generated artifacts: credentials that never appear in the log may still end up in job artifacts, which can be downloaded.
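
As a hedged illustration (the job and file names are made up), a pipeline like the following keeps its log clean but still exposes credentials through a downloadable artifact:

# Hypothetical .gitlab-ci.yml snippet: nothing sensitive is echoed to the log,
# but the generated file containing credentials is published as an artifact.
build:
  stage: build
  script:
    - ./render-config.sh > config.env       # assumed helper that writes DATABASE_URL etc. into the file
  artifacts:
    paths:
      - config.env                           # downloadable by anyone who can view the job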

Typical Job Misconfigurations

The most obvious misconfiguration is logging sensitive values in the job output, e.g. for debugging purposes, without protecting the values as described above. The following examples are real-world excerpts of job outputs.

# Example 0 - Echoing Google Cloud credentials; variations of this include commands such as `printenv`, `env`, etc.

$ mkdir -p ./creds
$ echo $GCLOUD_SERVICE_KEY | base64 -d > ./creds/serviceaccount.json
$ echo $GCLOUD_SERVICE_KEY
[cut by compass]
$ cat ./creds/serviceaccount.json
{
 "type": "service_account",
 "project_id": "[redacted by compass]",
 "private_key_id": "[redacted by compass]",
 "private_key": "-----BEGIN PRIVATE KEY-----[redacted by compass]-----END PRIVATE KEY-----n",
 "client_email": "[redacted by compass].iam.gserviceaccount.com",
 "client_id": "[redacted by compass]",
 "auth_uri": "https://accounts.google.com/o/oauth2/auth",
 "token_uri": "https://oauth2.googleapis.com/token",
 "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
 "client_x509_cert_url": "[redacted by compass]",
 "universe_domain": "googleapis.com"
}
$ terraform init
Initializing the backend...
Successfully configured the backend "[redacted by compass]"! Terraform will automatically
use this backend unless the backend configuration changes.

# Example 1 - Echoing the SSH private key
$ git remote set-url origin "${CI_REPOSITORY_URL}"
Executing "step_script" stage of the job script
$ eval $(ssh-agent -s)
Agent pid 13
$ echo "$PRIVATE_KEY"
-----BEGIN OPENSSH PRIVATE KEY-----
[redacted by compass]

Another common issue is setting a tool's verbosity to a debug level without realizing that it then starts to log sensitive values.

# Example 3 - SSH session used to deploy a setup using Docker and leaking the command line environment variables

debug1: Sending command: docker run -d --name my-api -e DATABASE_URL=postgresql://postgres:[redacted by compass]@postgres.[redacted by compass]rds.amazonaws.com/postgres -p 80:80 registry.gitlab.com/[redacted by compass]/api
debug1: channel 0: free: client-session, nchannels 1

# Example 4 - Node error log leaking the MongoDB connection string due to an error

$ npm install --silent
$ npm test --silent
(node:38) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Server is running on  8000  Backend API
(node:38) [DEP0170] DeprecationWarning: The URL mongodb://admin:admin@[redacted by compass]:27017,[redacted by compass]:27017[redacted by compass]:27017/test?authSource=admin&replicaSet=[redacted by compass]&retryWrites=true&w=majority&ssl=true is invalid. Future versions of Node.js will throw an error.
==============Mongodb Database Connected Successfully==============
Database connection successful
 ✔ should establish a successful database connection (1041ms)
 1 passing (1s)

In reality, there are tons of variations of these issues.

Harvesting Secrets

Finding these secrets manually is quite cumbersome, as groups and projects can contain a lot of jobs. That is why we introduce you to Pipeleak.

Pipeleak automatically scans job logs for secrets. Its ruleset can be extended easily, and it offers a TruffleHog integration, which allows finding verified credentials.

The Pipeleak scanner generates log output that prints any findings. In this example, Pipeleak scans only the jobs of projects that the current user is a member of and that match the search query secret-pipelines, filtering the results to the high and high-verified confidence levels.

$ pipeleak --gitlab https://gitlab.com --token glpat-[cut by compass] --member --artifacts --cookie [cut by compass] --search secret-pipelines --confidence high,high-verified
2024-11-18T10:34:02Z INF Gitlab Version Check revision=40decc847c8 version=17.6.0-pre
2024-11-18T10:34:02Z INF Fetching projects
2024-11-18T10:34:02Z INF Provided GitLab session cookie is valid
2024-11-18T10:34:02Z INF Filtering scanned projects by query=secret-pipelines
2024-11-18T10:34:05Z INF Fetched all projects
2024-11-18T10:34:07Z WRN HIT Artifact (in archive) archive=artifact_archive_file.tar confidence=high file=secrets_in_archive.txt jobName=archives-job ruleName="Generic - 1719" url=gitlab.com/frj1comp/secret-pipelines/-/jobs/8398606421 value="datadog_api_key=secrets.txt file hit "
[cut by compass]
2024-11-18T10:34:07Z WRN HIT confidence=high jobName=build-job ruleName="AWS API Key" url=gitlab.com/frj1comp/secret-pipelines/-/jobs/8398625852 value="ruffle Hog AKIAYVP4CIPPERUVIFXG" Truffle Hog AKIAYVP4CIPPERUVIFXG $ echo "https://gitlab-ci-token:$MASK_ME@g"
[cut by compass]
2024-11-18T10:34:07Z WRN HIT DOTENV: Check artifacts page which is the only place to download the dotenv file artifactUrl=gitlab.com/frj1comp/secret-pipelines/-/jobs/8398606415/-/artifacts confidence=high jobId=8398606415 jobName=dotenv-only-job ruleName="Generic - 1719" value="datadog_api_key=dotenv ONLY file hit, no other artifacts "
[cut by compass]
2024-11-18T10:34:08Z WRN HIT Artifact confidence=high file=an_artifact.txt jobName=artifact-job ruleName="Generic - 1719" url=gitlab.com/frj1comp/secret-pipelines/-/jobs/8398606409 value="datadog_api_key=secret_artifact_value "

Pipeleak searches the textual output of the job log (HIT) and scans the generated artifact files as well (HIT Artifact). If an artifact is an archive, it scans the included files (HIT Artifact (in archive)). Moreover, it can scan dotenv artifacts, a report type that collects a set of environment variables as an artifact (HIT DOTENV). Finally, you can review the findings and abuse the newly found credentials in your engagement.
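
For context, dotenv report artifacts are declared in the pipeline definition. A minimal, hypothetical job producing one (which Pipeleak would then flag via the artifacts page) could look like this:

# Hypothetical job producing a dotenv report artifact:
build:
  stage: build
  script:
    - echo "API_TOKEN=$API_TOKEN" >> build.env   # variable name invented for illustration
  artifacts:
    reports:
      dotenv: build.env                          # collected as a dotenv report artifact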

Pipeleak offers additional features such as GitLab runner enumeration, enumeration of accessible CI/CD variables, and a vulnerability checker. For more information and more extensive documentation, check out the project on GitHub.

Happy Scanning!

Compass Security Blog – Read More

Horns&Hooves Campaign Delivers RATs via Fake Emails and JavaScript Payloads

A newly discovered malware campaign has been found to target private users, retailers, and service businesses mainly located in Russia to deliver NetSupport RAT and BurnsRAT.
The campaign, dubbed Horns&Hooves by Kaspersky, has hit more than 1,000 victims since it began around March 2023. The end goal of these attacks is to leverage the access afforded by these trojans to install stealer

The Hacker News – Read More

Energy industry contractor says ransomware attack has limited access to IT systems

The company, ENGlobal Corporation, has restricted employee access to its IT system, limiting it to only essential business operations.

The Record from Recorded Future News – Read More

How Attackers Use Corrupted Files to Slip Past Security

New zero-day attack bypasses antivirus, sandboxes, and spam filters using corrupted files. Learn how ANY.RUN’s sandbox detects and…

Hackread – Latest Cybersecurity, Tech, Crypto & Hacking News – Read More