Key Security Elements

Efforts taken to safeguard the codebase from external attacks and known vulnerabilities.

Risk: High (non-reviewable code)

This check determines whether the project has generated executable (binary) artifacts in the source repository.

Including generated executables in the source repository increases user risk. Many programming language systems can generate executables from source code (e.g., machine code compiled from C/C++, Java .class files, Python .pyc files, and minified JavaScript). Users will often run executables directly if they are included in the source repository, which leads to many dangerous behaviors.

Problems with generated executable (binary) artifacts:

  • Binary artifacts cannot be reviewed, so obsolete or maliciously subverted executables may go unnoticed. Reviews generally cover source code, not executables, because it is difficult to audit an executable and confirm that it corresponds to the source code. Over time, the included executables may also drift out of sync with the source code.

  • Generated executables allow the executable generation process to atrophy, which can lead to an inability to create working executables. These problems can be countered with verified reproducible builds, but it's easier to implement verified reproducible builds when executables are not included in the source repository (since the executable generation process is less likely to have atrophied).

Allowed by Scorecard:

  • Files in the source repository that are simultaneously reviewable source code and executables, since these are reviewable. (Some interpretive systems, such as many operating system shells, don't have a mechanism for storing generated executables that are different from the source file.)

  • Source code in the source repository generated by other tools (e.g., by bison, yacc, flex, and lex). There are potential downsides to generated source code, but generated source code tends to be much easier to review and thus presents a lower risk. Generated source code is also often difficult for external tools to detect.

  • Generated documentation in source repositories. Generated documentation is intended for use by humans (not computers) who can evaluate the context. Thus, generated documentation doesn't pose the same level of risk.

Remediation steps

  • Remove the generated executable artifacts from the repository (the example .gitignore entries after these steps can help keep them from being re-added).

  • Build from source.
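
For example, a few illustrative .gitignore entries (the patterns are placeholders to adjust to the languages your project actually uses) can help keep common generated executables from being committed again:

```gitignore
# Compiled machine code and object files (C/C++ and similar)
*.o
*.obj
*.exe
*.dll
*.so

# Java bytecode
*.class
*.jar

# Python bytecode
*.pyc
__pycache__/

# Common build output directories
build/
dist/
```

Note that .gitignore only prevents new artifacts from being added; executables already committed still need to be removed from the repository (and, if necessary, from its history).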

Risk: High (vulnerable to intentional malicious code injection)

This check determines whether a project's default and release branches are protected with GitHub's branch protection or repository rules settings. Branch protection allows maintainers to define rules that enforce certain workflows for branches, such as requiring review or passing certain status checks before acceptance into a main branch, or preventing rewriting of public history.

Note: The following settings queried by the Branch-Protection check require an admin token: DismissStaleReviews, EnforceAdmins, RequireLastPushApproval, RequiresStatusChecks, and UpToDateBeforeMerge. If the provided token does not have admin access, the check queries only the branch settings accessible to non-admins and bases its results on those settings. However, all of these settings are accessible via Repo Rules. EnforceAdmins is calculated slightly differently: it is treated as false if any Bypass Actors are defined on any rule, regardless of whether they are admins.

Different types of branch protection protect against different risks:

  • Require code review:

    • requires at least one reviewer, which greatly reduces the risk that a compromised contributor can inject malicious code. The review also increases the likelihood that an unintentional vulnerability in a contribution will be detected and fixed before the change is accepted.

    • requiring two or more reviewers further protects against the insider risk whereby an attacker uses a compromised contributor account to approve ("LGTM") the attacker's PR and inject malicious code as if it were legitimate.

  • Prevent force push: prevents force pushes (git push --force) to public branches, which overwrite code irrevocably. This protection prevents public history from being rewritten without external notice.

  • Require status checks: ensures that all required CI tests pass before a change is accepted. (A minimal example of a workflow that can serve as a required status check follows this list.)
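
For illustration (the file path, job name, and commands are placeholders), a workflow along these lines defines a job named "test" that could then be selected as a required status check in the branch protection settings:

```yaml
# .github/workflows/ci.yml (placeholder path)
name: CI
on:
  pull_request:
  push:
    branches: [main]

permissions: read-all

jobs:
  test:                             # this job name is what you would mark as a required status check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # ideally pinned by full commit SHA (see the pinned-dependencies discussion below)
      - name: Run tests
        run: make test              # placeholder; replace with your project's test command
```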

Although requiring code review can greatly reduce the chance that unintentional or malicious code enters the "main" branch, it is not feasible for all projects, such as those that don't have many active participants. For more discussion, see Code Reviews.

Additionally, in some cases, these rules will need to be suspended. For example, if a past commit includes illegal content such as child pornography, it may be necessary to use a force push to rewrite the history rather than simply hide the commit.

This check uses tiered scoring: each tier must be fully satisfied to achieve points at the next tier. For example, if you fulfill the Tier 3 requirements but not all of the Tier 2 requirements, you will not receive any points for Tier 3.

Note: If Scorecard is run without an administrative access token, the requirements that specify “For administrators” can be safely ignored, and scores will be determined as if all such requirements have been met.

Tier 1 Requirements (3/10 points):

  • Prevent force push

  • Prevent branch deletion

Tier 2 Requirements (6/10 points):

  • Require at least 1 reviewer for approval before merging

  • For administrators: Require branch to be up to date before merging

  • For administrators: Require approval of the most recent reviewable push

Tier 3 Requirements (8/10 points):

  • Require the branch to pass at least 1 status check before merging

Tier 4 Requirements (9/10 points):

  • Require at least 2 reviewers for approval before merging

  • Require review from code owners

Tier 5 Requirements (10/10 points):

  • For administrators: Dismiss stale reviews and approvals when new commits are pushed

  • For administrators: Include administrator for review

GitLab Integration Status:

  • GitLab associates releases with commits rather than with branches, so releases are ignored in this portion of the scoring.

Remediation steps

  • Enable branch protection settings in your source hosting provider to avoid forced pushes or deletion of your important branches.

  • For GitHub, check out the steps here.

Risk: Critical (vulnerable to repository compromise)

This check determines whether the project's GitHub Action workflows have dangerous code patterns. Some examples of these patterns are untrusted code checkouts, logging github context and secrets, or use of potentially untrusted inputs in scripts. The following patterns are checked:

Untrusted Code Checkout: This is the misuse of potentially dangerous triggers. The check flags workflows that use the pull_request_target or workflow_run trigger in conjunction with an explicit checkout of the pull request's code. Workflows triggered with pull_request_target / workflow_run have write permission to the target repository and access to target repository secrets. With the PR checkout, PR authors may compromise the repository, for example by running build scripts controlled by the author of the PR or by reading the token in memory. This check does not detect whether untrusted code checkouts are used safely, for example only on pull requests that have been assigned a label.
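
A minimal sketch of this dangerous pattern (the workflow name, secret name, and build command are placeholders): the privileged pull_request_target trigger is combined with an explicit checkout of the PR author's code, so the build step executes attacker-controlled code while repository secrets are in scope.

```yaml
# DANGEROUS PATTERN: shown for illustration only; do not copy
name: PR build
on: pull_request_target            # privileged trigger: write permission and secrets access

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # explicitly checks out the untrusted PR head ...
          ref: ${{ github.event.pull_request.head.sha }}
      # ... and then runs code the PR author controls, with a secret in scope
      - run: make build
        env:
          SOME_SECRET: ${{ secrets.SOME_SECRET }}   # placeholder secret name
```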

Script Injection with Untrusted Context Variables: This pattern detects whether a workflow's inline script may execute untrusted input from attackers. This occurs when an attacker adds malicious commands and scripts to a context. When a workflow runs, these strings may be interpreted as code that is executed on the runner. Attackers can add their own content to certain github context variables that are considered untrusted, for example, github.event.issue.title. These values should not flow directly into executable code.
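
For example (a sketch; the step names are placeholders), interpolating github.event.issue.title directly into a run script lets an attacker-controlled title be executed by the shell, whereas passing it through an environment variable keeps it as data:

```yaml
# Vulnerable: the attacker-controlled title is expanded into the script itself,
# so shell metacharacters in it are executed on the runner.
- name: Echo title (unsafe)
  run: echo "Issue title: ${{ github.event.issue.title }}"

# Safer: the untrusted value is passed through an environment variable and
# treated as data rather than code.
- name: Echo title (safer)
  env:
    TITLE: ${{ github.event.issue.title }}
  run: echo "Issue title: $TITLE"
```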

The highest score is awarded when all workflows avoid dangerous code patterns.

Remediation steps

  • Avoid dangerous workflow patterns. See this post for information on avoiding untrusted code checkouts. See this document for information on avoiding and mitigating the risk of script injections.

Risk: Medium (possible vulnerabilities in code)

This check tries to determine if the project uses fuzzing by checking:

  1. if the repository name is included in the OSS-Fuzz project list;

  2. if ClusterFuzzLite is deployed in the repository (a sketch of such a workflow follows this list);

  3. if there are user-defined, language-specific fuzzing functions in the repository; or

  4. if it contains a OneFuzz integration detection file.
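
For item 2, a ClusterFuzzLite deployment is typically a GitHub Actions workflow roughly along the lines of the sketch below. The action paths, inputs, and values shown are assumptions to verify against the ClusterFuzzLite documentation rather than copy verbatim.

```yaml
# .github/workflows/cflite_pr.yml (placeholder path; verify details against the ClusterFuzzLite docs)
name: ClusterFuzzLite PR fuzzing
on:
  pull_request:

permissions: read-all

jobs:
  fuzz-pr:
    runs-on: ubuntu-latest
    steps:
      - name: Build fuzzers
        uses: google/clusterfuzzlite/actions/build_fuzzers@v1   # assumed action path
        with:
          language: c++               # placeholder; set to your project's language
          sanitizer: address
      - name: Run fuzzers
        uses: google/clusterfuzzlite/actions/run_fuzzers@v1     # assumed action path
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          fuzz-seconds: 300           # placeholder fuzzing budget
          mode: code-change
          sanitizer: address
```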

Fuzzing, or fuzz testing, is the practice of feeding unexpected or random data into a program to expose bugs. Regular fuzzing is important to detect vulnerabilities that may be exploited by others, especially since attackers can also use fuzzing to find the same flaws.

Note: A project that fulfills this criterion with other tools may still receive a low score on this test. There are many ways to implement fuzzing, and it is challenging for an automated tool like Scorecard to detect them all. A low score is therefore not a definitive indication that the project is at risk.

Remediation steps

  • Integrate the project with OSS-Fuzz by following the instructions here.

Risk: Low (possible impediment to security review)

This check tries to determine if the project has published a license. It works by using either hosting APIs or by checking standard locations for a file named according to common conventions for licenses.

A license can give users information about how the source code may or may not be used. The lack of a license will impede any kind of security review or audit and create a legal risk for potential users.

Scorecard uses the GitHub License API for GitHub-hosted projects. Otherwise, Scorecard uses its own heuristics to detect a published license file.

On its own, this check detects files in the top-level directory whose names are LICENSE, LICENCE, COPYING, or COPYRIGHT, alone or with a common extension such as .html, .txt, or .md. It will also detect these files in a directory named LICENSES. (Files in a LICENSES directory are typically named with their SPDX license identifier followed by an appropriate file extension, as described in the REUSE Specification.)

License Requirements:

  • A detected LICENSE, COPYRIGHT, or COPYING filename (6/10 points)

  • The detected file is at the top-level directory (3/10 points)

  • An FSF or OSI license is specified (1/10 points)

Remediation steps

  • Determine which license to apply to your project. For GitHub-hosted projects, follow these instructions to establish a license for your project.

  • For other hosting environments, create a license file named LICENSE, COPYRIGHT, or COPYING with one of the following extensions: .adoc, .asc, .docx, .doc, .ext, .html, .markdown, .md, .rst, .txt, or .xml, and place it in the top-level directory. To identify a specific license, use an SPDX license identifier in the filename. Examples include LICENSE.md, Apache-2.0-LICENSE.md, or LICENSE-Apache-2.0.

  • Alternatively, create a LICENSES directory and add one or more license files whose names match their SPDX license identifiers, such as LICENSES/Apache-2.0.txt.

Risk: Medium (possible compromised dependencies)

This check tries to determine if the project pins dependencies used during its build and release process. A "pinned dependency" is a dependency that is explicitly set to a specific hash instead of allowing a mutable version or range of versions. It is currently limited to repositories hosted on GitHub and does not support other source hosting repositories (i.e., Forges).

The check works by looking for unpinned dependencies in Dockerfiles, shell scripts, and GitHub workflows that are used during the build and release process of a project. As a special consideration, Go modules referenced by full semantic versions are treated as pinned, because the Go tool verifies downloaded content against the hashes recorded when the module was first downloaded.

Pinned dependencies reduce several security risks:

  • They ensure that checking and deployment are all done with the same software, reducing deployment risks, simplifying debugging, and enabling reproducibility.

  • They can help mitigate compromised dependencies from undermining the security of the project (in the case where you've evaluated the pinned dependency, you are confident it's not compromised, and a later version is released that is compromised).

  • They are one way to counter dependency confusion (aka substitution) attacks, in which an application uses multiple feeds to acquire software packages (a "hybrid configuration"), and attackers fool the user into using a malicious package via a feed that was not expected for that package.

However, pinning dependencies can delay important updates, for example when the pinned version has a security vulnerability or is itself compromised. Mitigate this risk by:

  • using automated tools that notify you when your dependencies are outdated (see the sample Dependabot configuration after this list);

  • quickly updating applications that do pin dependencies.
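
As one concrete option (a sketch to adapt; the ecosystems and schedule shown are placeholders), GitHub's Dependabot can be configured through a .github/dependabot.yml file to open pull requests when pinned dependencies, including hash-pinned GitHub Actions, fall behind:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "github-actions"   # keeps pinned action versions/SHAs current
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"              # placeholder; use your project's ecosystem(s)
    directory: "/"
    schedule:
      interval: "weekly"
```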

For projects hosted on GitHub, you can learn more about dependencies using the GitHub dependency graph.

Remediation steps

  • If your project is producing an application, declare all your dependencies with specific versions in your package format file (e.g., package.json for npm, requirements.txt for Python, packages.config for NuGet). For C/C++, check in the code from a trusted source and add a README documenting the specific version used (and the archive SHA hashes).

  • If your project is producing an application and the package manager supports lock files (e.g., package-lock.json for npm), make sure to check these into the source repository as well. These files record integrity hashes for the entire dependency tree and protect against future exploitation if a package is later compromised.

  • For Dockerfiles used in building and releasing your project, pin dependencies by hash. See Dockerfile for example. If you are using a manifest list to support builds across multiple architectures, you can pin to the manifest list hash instead of a single image hash. You can use a tool like crane to obtain the hash of the manifest list like in this example.

  • For GitHub workflows used in building and releasing your project, pin dependencies by hash (a sketch follows this list). See main.yaml for an example. You may use StepSecurity's online tool to determine the permissions needed for your workflows and to pin actions, ticking the "Pin actions to a full-length commit SHA" option. You may also tick the "Restrict permissions for GITHUB_TOKEN" option to fix issues found by the Token-Permissions check.

  • To help update your dependencies after pinning them, use tools such as those listed for the dependency update tool check.
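
As a sketch of what a hash-pinned workflow step looks like (the commit SHA and build command below are placeholders, not real values):

```yaml
permissions:
  contents: read        # restrict the default GITHUB_TOKEN permissions

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Pinned to a full-length commit SHA instead of a mutable tag. The SHA below
      # is a placeholder; substitute the commit that the desired release tag points to.
      - uses: actions/checkout@0000000000000000000000000000000000000000 # v4 (placeholder)
      - run: make build   # placeholder build command
```

The same idea applies to Dockerfiles, where a FROM line can reference a base image by its sha256 digest instead of a mutable tag.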

Risk: Medium (possible insecure reporting of vulnerabilities)

This check tries to determine if the project has published a security policy. It works by looking for a file named SECURITY.md (case-insensitive) in a few well-known directories.

A security policy (typically a SECURITY.md file) can give users information about what constitutes a vulnerability and how to report one securely so that information about a bug is not publicly visible.

This check examines the contents of the security policy file, awarding points to policies that describe a vulnerability-handling process and disclosure timelines and that include links (e.g., URLs and email addresses) to support users. A sketch of a policy meeting these criteria follows the scoring breakdown below.

Linking Requirements (one or more) (6/10 points):

  • A valid form of an email address to contact for vulnerabilities

  • A valid form of an http/https address to support vulnerability reporting

Free Form Text (3/10 points):

  • Free-form text is present in the security policy file, beyond simply having an http/https address and/or email address in the file

  • The string length of any such links in the policy file does not count toward detecting free-form text

Security Policy Specific Text (1/10 points):

  • Specific text providing basic or general information about vulnerability and disclosure practices, expectations, and/or timelines

  • The text should include a total of 2 or more case-insensitive hits matching "vuln" (as in "vulnerability" or "vulnerabilities"), "disclos" (as in "disclosure" or "disclose"), or numbers that convey expected timeframes, e.g., 30 days or 90 days
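
For illustration only (the email address, URL, and timelines below are placeholders), a short SECURITY.md along these lines would satisfy the criteria above:

```markdown
# Security Policy

## Reporting a Vulnerability

Please report suspected vulnerabilities privately to security@example.com
(placeholder address) or through the private reporting form at
https://example.com/security (placeholder URL). Please do not disclose the
vulnerability publicly until a fix has been released.

We aim to acknowledge reports within 7 days and to coordinate disclosure of a
fix within 90 days.
```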

Remediation steps

  • Place a security policy file SECURITY.md in the root directory of your repository. This makes it easily discoverable by a vulnerability reporter.

  • The file should contain information on what constitutes a vulnerability and a way to report it securely (e.g. issue tracker with private issue support, encrypted email with a published public key). Follow the coordinated vulnerability disclosure guidelines to respond to vulnerability disclosures.

  • For GitHub, see more information here.
