HelloMavens

Salesforce Security Review

A security assessment of your Salesforce environment against the Security Benchmark for Salesforce (SBS).

Prepared for
Company: Atlas Cloud Services
Email: incident-response@atlascloud.example
Size: Enterprise
Industry: B2B SaaS
Regulations in scope: SOC 2, ISO 27001, GDPR
Generated May 6, 2026 · SBS vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f · Engine v0.0.0-alpha.10

Section 1

Executive summary

A one-page read on where you stand.

Overall risk grade: F (4.5 / 100)
Critical gaps · 11 critical controls failed — grade capped at C

See the Appendix for what the passes mean and why some controls weren't applicable →

Risk tier breakdown

Top critical findings

  1. SBS-ACS-003 Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
  2. SBS-ACS-006 Respondent attests the `Use Any API Client` permission is NOT restricted to a few highly-trusted users with documented justification. This permission bypasses Connected App allow-listing.
  3. SBS-AUTH-001 Respondent attests the org-wide SSO enforcement setting is NOT enabled. Without it, users can still authenticate with Salesforce passwords, bypassing the IdP.

Risk by business impact

  • Data breach exposure: 17 access / authentication / data-protection controls failed.
  • Compliance gap: 12 categories scored below 65 / 100 — likely weak spots in audit conversations.

Section 2

Category scorecards

One card per SBS category. Each shows the 0–100 score plus the OWASP and regulation citations relevant to that category.

Access controls
Score: 11 / 100 · 2 pass, 10 fail
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(4), CC6.1, CC6.3, A.5.15 +7

Authentication
Score: 0 / 100 · 0 pass, 4 fail
OWASP: A07:2021-Identification and Authentication Failures, A01:2021-Broken Access Control, A05:2021-Security Misconfiguration
Regs: 164.312(d), CC6.1, CC6.7, A.5.16 +7

Code security
Score: 0 / 100 · 0 pass, 4 fail
OWASP: A06:2021-Vulnerable and Outdated Components, A08:2021-Software and Data Integrity Failures, A03:2021-Injection
Regs: 164.308(a)(1)(ii)(A), CC7.1, CC8.1, A.8.25 +14

Customer portals
Score: 0 / 100 · 0 pass, 5 fail
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(4), 164.312(a), CC6.1, CC6.6 +12

Data protection
Score: 18 / 100 · 1 pass, 3 fail
OWASP: A02:2021-Cryptographic Failures, A04:2021-Insecure Design, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(1)(ii)(A), 164.312(b), CC6.1, CC6.7 +18

Deployments
Score: 0 / 100 · 0 pass, 6 fail
OWASP: A01:2021-Broken Access Control, A05:2021-Security Misconfiguration, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(8), CC8.1, A.5.15, A.8.32 +16

FDNS
Score: 0 / 100 · 0 pass, 1 fail
OWASP: A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(8), 164.316(b), CC1.4, CC2.2 +5

FILE
Score: 0 / 100 · 0 pass, 3 fail
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A07:2021-Identification and Authentication Failures
Regs: CC6.1, CC6.6, A.5.10, A.5.13 +9

Integrations
Score: 22 / 100 · 1 pass, 3 fail
OWASP: A03:2021-Injection, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: CC6.6, CC6.7, A.5.19, A.8.20 +8

MON
Score: 0 / 100 · 0 pass, 5 fail
OWASP: A09:2021-Security Logging and Monitoring Failures, A05:2021-Security Misconfiguration
Regs: 164.308(a)(1)(ii)(D), 164.312(b), CC7.2, CC7.3 +7

Connected apps & OAuth
Score: 0 / 100 · 0 pass, 4 fail
OWASP: A01:2021-Broken Access Control, A05:2021-Security Misconfiguration, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(4), CC6.1, CC6.6, A.5.15 +9

Security configuration
Score: 0 / 100 · 0 pass, 2 fail
OWASP: A05:2021-Security Misconfiguration
Regs: 164.308(a)(8), CC7.1, A.5.36, A.8.9 +1

Section 3

Remediation detail

Every failed control with what to fix, why it matters, how to fix it, and how to verify the fix worked.

SBS-ACS-003 · Critical
Documented Justification for `Approve Uninstalled Connected Apps` Permission
Failed
What's wrong

Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.

Why it matters

The Approve Uninstalled Connected Apps permission allows users to bypass Connected App usage restrictions and self-authorize any OAuth application without administrator approval. This establishes an uncontrolled security boundary: users with this permission can grant external applications access to Salesforce data without oversight, enabling data exfiltration, unauthorized integrations, and potential account compromise. Unlike other permissions that require additional failures to exploit, this permission directly enables unauthorized third-party access the moment it is misassigned—making it a primary security boundary that must be tightly controlled.

Step-by-step fix
  1. Remove the Approve Uninstalled Connected Apps permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
  2. For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
  3. Ensure that connected apps required for business operations are properly installed and allowlisted rather than relying on this permission for end-user access.
  4. Reconcile and update the system of record to ensure complete and accurate inventory of all assignments of this permission.
Validation
  1. Enumerate all profiles, permission sets, and permission set groups that include the Approve Uninstalled Connected Apps permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
  2. Compare the enumerated list against the organization's designated system of record for this permission.
  3. Verify that every profile, permission set, and permission set group granting "Approve Uninstalled Connected Apps" has a corresponding entry in the system of record.
  4. Confirm that each entry includes:
    • A clear business or technical justification for requiring this permission,
    • Identification of the user role or persona (e.g., administrator, developer, integration manager),
    • Any applicable exception or approval documentation, and
    • Confirmation that the use case is limited to testing or managing connected app integrations.
  5. Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
  6. Flag as noncompliant any authorizations that lack documentation or justification, or that are assigned to unauthorized user populations.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-006 · Critical
Documented Justification for `Use Any API Client` Permission
Failed
What's wrong

Respondent attests the `Use Any API Client` permission is NOT restricted to a few highly-trusted users with documented justification. This permission bypasses Connected App allow-listing.

Why it matters

The Use Any API Client permission allows users to bypass API Access Control entirely, authorizing any OAuth-connected application without requiring it to be pre-vetted or allowlisted. This establishes an uncontrolled security boundary: users with this permission can grant data access to arbitrary external applications, enabling data exfiltration, unauthorized integrations, and potential account compromise without administrator oversight. Granting this permission to unauthorized personnel completely defeats the purpose of API Access Control, creating a direct path to unauthorized third-party access that requires no other control to fail.

Step-by-step fix
  1. Remove the Use Any API Client permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
  2. For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
  3. Ensure that connected apps required for business operations are properly vetted and allowlisted rather than relying on this permission for end-user access.
  4. Reconcile and update the system of record to ensure complete and accurate inventory of all assignments of this permission.
Validation
  1. Enumerate all profiles, permission sets, and permission set groups that include the Use Any API Client permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
  2. Compare the enumerated list against the organization's designated system of record for this permission.
  3. Verify that every profile, permission set, and permission set group granting Use Any API Client has a corresponding entry in the system of record.
  4. Confirm that each entry includes:
    • A clear business or technical justification for requiring this permission,
    • Identification of the user role or persona (e.g., administrator, developer, integration manager),
    • Any applicable exception or approval documentation, and
    • Confirmation that the use case is limited to testing or managing connected app integrations.
  5. Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
  6. Flag as noncompliant any authorizations that lack documentation or justification, or that are assigned to unauthorized user populations.
Effort: ~3h · Difficulty: Medium · Scope: inventory
SBS-AUTH-001 · Critical
Enable Organization-Wide SSO Enforcement Setting
Failed
What's wrong

Respondent attests the org-wide SSO enforcement setting is NOT enabled. Without it, users can still authenticate with Salesforce passwords, bypassing the IdP.

Why it matters

Without the org-level SSO enforcement setting enabled, users can authenticate directly to Salesforce using local credentials—creating a parallel authentication path outside centralized identity management. This establishes an uncontrolled security boundary: password-based attacks (credential stuffing, phishing, brute force) can target Salesforce directly, enabling unauthorized access without requiring any other control to fail. Attackers bypass organizational identity controls, MFA policies, and session management enforced at the IdP layer. This setting is the primary technical control that establishes the SSO security boundary.

Step-by-step fix
  1. Navigate to Setup → Single Sign-On Settings.
  2. Enable the "Disable login with Salesforce credentials" setting.
  3. Validate that SSO is properly configured and functional before enabling this setting to prevent lockout.
  4. Ensure approved break-glass or administrative accounts have the "Is Single Sign-On Enabled" permission removed via their profiles or permission sets so they can still authenticate if needed.
Validation
  1. Retrieve SingleSignOnSettings (part of SecuritySettings) via Metadata API or navigate to Setup → Single Sign-On Settings in the UI.
  2. Verify that isLoginWithSalesforceCredentialsDisabled is set to true.
  3. Flag the org if the setting is not enabled.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-AUTH-004 · Critical
Enforce Strong Multi-Factor Authentication for External Users with Substantial Access to Sensitive Data
Failed
What's wrong

Respondent attests external users with access to sensitive data are NOT required to use strong multi-factor authentication. External-user MFA gaps are a frequent source of customer data breaches.

Why it matters

Without enforced multi-factor authentication, external users with substantial access to sensitive data can authenticate using only a password—establishing a single point of failure for the authentication boundary. External users present elevated credential risk due to weaker identity proofing, less organizational oversight, and exposure to consumer-grade phishing attacks. Attackers who compromise a single password through phishing, credential stuffing, or account takeover gain direct access to sensitive data without requiring any other control to fail. This creates an unprotected authentication path to high-value data that bypasses the defense-in-depth protections applied to internal users.

Step-by-step fix
  1. Apply the “Multi-Factor Authentication for User Interface Logins” permission through profiles or permission sets for all active external users with substantial access to sensitive data.
  2. Configure suitable strong second-factor options in Setup → Identity → Identity Verification (e.g., authenticator app, FIDO2 security key).
Validation
  1. Enumerate all active external human users with substantial access to sensitive data.
  2. Validate that in-scope users have the “Multi-Factor Authentication for User Interface Logins” permission through profiles or permission sets.
  3. Flag any in-scope users who lack the “Multi-Factor Authentication for User Interface Logins” permission.
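A starting point for step 1, sketched in Anonymous Apex; it assumes external portal users can be recognized by their linked contact record, so adjust the filter to your license types:
    // List active external users (portal users carry a ContactId); these are the
    // accounts to check for the MFA permission in steps 2 and 3.
    List<User> externalUsers = [
        SELECT Id, Username, Profile.Name, UserType
        FROM User
        WHERE IsActive = true AND ContactId != null
    ];
    System.debug(externalUsers.size() + ' active external users to review for MFA enforcement');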
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CODE-004 · Critical
Prevent Sensitive Data in Application Logs
Failed
What's wrong

Respondent attests they cannot confirm application logs are free of passwords, tokens, or sensitive personal data. Log exfiltration is a canonical breach vector.

Why it matters

When application logs capture sensitive data, attackers who compromise low-privilege accounts with Read access to log storage can exfiltrate credentials, PII, or regulated data without triggering access controls on the original source objects—transforming a logging framework into a data leakage vector. In regulated industries, a compromised administrator querying log objects can extract thousands of customer records in minutes, with the audit trail showing only "legitimate" queries. During breach investigations, logs become evidence of regulatory violations rather than forensic tools, triggering consent orders and significant financial penalties.

Step-by-step fix
  1. Implement mechanisms to prevent sensitive data from being written to logs:
    public class SecureLogger {
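        // 'Logger' below stands in for the org's persistent logging framework (see SBS-CODE-003).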
        public static void logInfo(String message, Map<String, Object> context) {
            // Sanitize context before logging
            Map<String, Object> sanitized = sanitizeContext(context);
            Logger.info(message, sanitized);
        }
        
        private static Map<String, Object> sanitizeContext(Map<String, Object> ctx) {
            Map<String, Object> result = new Map<String, Object>();
            for (String key : ctx.keySet()) {
                // Mask sensitive fields, log IDs instead of full records
                if (key.containsIgnoreCase('password') || 
                    key.containsIgnoreCase('token') || 
                    key.containsIgnoreCase('ssn')) {
                    result.put(key, '***REDACTED***');
                } else if (ctx.get(key) instanceof SObject) {
                    result.put(key, ((SObject)ctx.get(key)).Id);
                } else {
                    result.put(key, ctx.get(key));
                }
            }
            return result;
        }
    }
    
  2. Audit existing log records in custom objects and purge Salesforce debug logs containing sensitive data.
  3. Update logging calls to avoid capturing sensitive data:
    // BAD - logs full account with SSN field
    System.debug('Processing: ' + acc);
    Logger.info('Processing account', new Map<String, Object>{'account' => acc});
    
    // GOOD - logs only record ID
    System.debug('Processing account: ' + acc.Id);
    SecureLogger.logInfo('Processing account', new Map<String, Object>{
        'accountId' => acc.Id,
        'recordCount' => 1
    });
    
  4. Consider implementing compensating controls such as automated testing that validates log outputs for sensitive data patterns, code review checks for logging security, or static analysis rules that detect common sensitive data exposure patterns.
Validation
  1. Sample representative Apex classes from high-risk areas (customer-facing functionality, payment processing, authentication flows) to identify logging statements in both custom frameworks and System.debug() calls.
  2. Examine log message construction to detect patterns that may capture the types of sensitive data listed above.
  3. Query recent log records stored in custom objects and review Salesforce debug logs to inspect actual log content for sensitive data:
    • Search for patterns matching SSNs, credit card numbers, email addresses, phone numbers
    • Identify authentication tokens, session IDs, or API keys in log messages
    • Flag any log records containing regulated data or PII
  4. Verify that mechanisms exist to prevent sensitive data from being logged (such as sanitization functions, code review checks, or automated validation).
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CPORTAL-001 · Critical
Prevent Insecure Direct Object Reference (IDOR) in Portal Apex
Failed
What's wrong

Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.

Why it matters

If portal-exposed Apex trusts user-controlled parameters to determine record access, external users can manipulate inputs to retrieve or modify unauthorized records. This may enable record enumeration, data exfiltration, or data corruption. IDOR vulnerabilities represent a critical authorization boundary failure.

Step-by-step fix
  • Enforce `with sharing` on portal-facing classes by default.
  • Scope queries using the running user's context where possible.
  • If record IDs are accepted as parameters, verify access using sharing or `UserRecordAccess` before returning or modifying data (see the sketch after this list).
  • Remove user-controlled query structure and allowlist permissible filter inputs.
  • Enforce CRUD and FLS on all returned or modified records.
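The sketch below shows the access-check pattern for a caller-supplied record Id; the class, object, and field choices are illustrative, not prescriptive.
    public with sharing class PortalCaseController {
        @AuraEnabled
        public static Case getCase(Id caseId) {
            // Query in user mode so the portal user's sharing rules and FLS apply.
            List<Case> visible = [SELECT Id, Subject, Status FROM Case
                                  WHERE Id = :caseId WITH USER_MODE LIMIT 1];
            if (visible.isEmpty()) {
                // Treat "not shared with me" the same as "not found" to block enumeration.
                throw new AuraHandledException('Record not found');
            }
            return visible[0];
        }
    }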
Validation
  1. Identify all Apex classes exposing @AuraEnabled, @InvocableMethod, or @RestResource methods accessible to portal users.
  2. Review method parameters of type Id, String, collections, or maps that could influence record access.
  3. Verify that:
    • Classes run `with sharing` or implement equivalent authorization checks.
    • Record access is validated before query or DML.
    • CRUD and FLS are enforced.
    • Dynamic SOQL does not incorporate unsanitized user input.
  4. Attempt to access unauthorized records by manipulating record IDs or filter inputs from a portal user session.
  5. Flag any method that relies solely on user-supplied parameters to control record access as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CPORTAL-002 · Critical
Restrict Guest User Record Access
Failed
What's wrong

Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.

Why it matters

Guest users represent the highest-risk trust boundary in Salesforce portals—they are unauthenticated, have zero accountability, generate minimal audit trail, and operate with potential adversarial intent. When guest users are granted object permissions or can invoke custom Apex methods, attackers can systematically enumerate organizational data without even creating an account. Historical Salesforce security updates have repeatedly addressed guest user permission defaults because vendors consistently misconfigure this boundary. A single guest-accessible method that queries user records, cases, accounts, or custom objects creates a public API for data exfiltration accessible to anyone on the internet. This constitutes a Critical boundary violation: unauthenticated attackers access organizational data with no authentication required.

Step-by-step fix
  1. Remove all object-level permissions from guest user profiles except those explicitly required for authentication flows.
  2. Audit and remove guest user access to any custom Apex methods that query or return organizational data.
  3. For public data requirements (knowledge articles, case submission), implement service layer pattern:
    @AuraEnabled
    public static List<Knowledge__kav> getPublicArticles() {
        if (UserInfo.getUserType() == 'Guest') {
            // Allowlist-based, no parameters accepted
            return [SELECT Id, Title, Summary FROM Knowledge__kav 
                    WHERE PublicationStatus = 'Online' 
                    AND IsVisibleInPkb = true 
                    LIMIT 10];
        }
        throw new AuraHandledException('Access denied');
    }
    
  4. Implement network-level rate limiting and CAPTCHA for guest-accessible endpoints.
  5. Review Salesforce security updates and apply guest user permission restrictions from recent releases.
Validation
  1. Identify all guest user profiles used by customer portal sites (typically named "Site Guest User" or similar).
  2. Review object-level permissions for guest user profiles and verify that all business-related standard and custom objects have Read, Create, Edit, Delete permissions set to disabled.
  3. Enumerate all custom Apex classes containing @AuraEnabled methods and verify that none are accessible to guest users (either by checking profile permissions or testing invocation from guest context).
  4. For any guest-accessible functionality beyond authentication flows, verify implementation of service layer architecture with explicit access controls.
  5. Test by accessing the portal without authentication and attempting to invoke Apex methods or query objects via built-in Lightning controllers.
  6. Flag any guest user object permissions or method access as noncompliant.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-CPORTAL-004 · Critical
Prevent Parameter-Based Record Access in Portal-Exposed Flows
Failed
What's wrong

Respondent attests portal-exposed Flows accept user-supplied input variables that determine which records are accessed. This is an IDOR vulnerability — Autolaunched Flows run in system context by default.

Why it matters

Flows accepting user-controlled input variables for record access create IDOR vulnerabilities allowing external users to access any record in the org. Because Autolaunched Flows run in system context without sharing by default, a single flow accepting a record ID input parameter bypasses all permissions and sharing rules. This constitutes a Critical boundary violation: unauthorized users access data they should never see, with no compensating controls required to fail.

Step-by-step fix
  1. Refactor flows to eliminate input variables controlling record access.
  2. Derive accessible records from authenticated user context (e.g., $User.Id, $User.ContactId, $User.AccountId), as shown in the sketch after this list.
  3. Configure flows to run in user context ("With Sharing") where available.
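In the Flow itself, step 2 means driving Get Records elements from the $User global variables rather than from an input variable. A minimal Apex equivalent of the same idea, with illustrative object and field choices:
    public with sharing class PortalSelfServiceController {
        @AuraEnabled(cacheable=true)
        public static List<Case> getMyCases() {
            // Derive scope from the authenticated portal user; accept no caller-supplied Ids.
            User me = [SELECT ContactId, Contact.AccountId FROM User
                       WHERE Id = :UserInfo.getUserId() LIMIT 1];
            return [SELECT Id, Subject, Status FROM Case
                    WHERE AccountId = :me.Contact.AccountId WITH USER_MODE];
        }
    }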
Validation
  1. Using the inventory from SBS-CPORTAL-003, identify all portal-exposed Autolaunched Flows.
  2. For each flow, examine input variables for types that could contain record IDs (Text, Record, Text Collection).
  3. Review flow logic to determine if input variables influence Get Records, Update Records, or Delete Records elements.
  4. Flag any flow accepting user-supplied input variables that control record access as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-DEP-005 · Critical
Implement Secret Scanning for Salesforce Source Repositories
Failed
What's wrong

Respondent attests they do NOT scan source repositories for committed secrets. Leaked Salesforce credentials are a leading source of supply-chain breaches.

Why it matters

Exposed Salesforce credentials in source repositories represent a direct path to unauthorized production access—a supply chain attack vector that bypasses all other access controls. Contractors, consultants, or any party with repository access can extract hardcoded tokens and authenticate directly to production orgs with the full permissions of the deployment identity. Attackers who compromise source control systems or CI/CD infrastructure gain immediate access to production Salesforce environments. Unlike other credential exposures, Salesforce access tokens often have broad administrative permissions and long validity periods, making them high-value targets. Organizations cannot detect this exposure through Salesforce audit logs alone—the attacker authenticates with valid credentials, and their activity appears legitimate.

Step-by-step fix
  1. Enable secret scanning on all repositories containing Salesforce code, metadata, or deployment configurations using platform-native tools or third-party secret scanning solutions.
  2. Configure scanning rules to detect Salesforce-specific credential patterns in addition to general secrets.
  3. Implement pre-commit hooks or CI checks that block commits containing detected secrets.
  4. Immediately rotate any Salesforce access tokens, refresh tokens, or credentials that have been committed to version control—even if subsequently removed, as they persist in git history.
  5. Migrate credential storage to secure secrets management solutions (e.g., CI/CD platform secrets, vault systems) and remove all hardcoded credentials from repositories.
  6. Establish a periodic rotation schedule for Salesforce deployment credentials to limit the window of exposure if a secret is leaked.
Validation
  1. Identify all repositories containing Salesforce metadata, SFDX projects, deployment scripts, or CI/CD pipeline configurations.
  2. Verify that automated secret scanning is enabled on each repository—either through the source control platform's native capabilities (e.g., GitHub Secret Scanning, GitLab Secret Detection) or through third-party tooling.
  3. Confirm that the scanning configuration includes patterns for Salesforce-specific secrets (access tokens, refresh tokens, consumer keys/secrets, session IDs).
  4. Review scanning logs or dashboards to verify the tool is actively running and producing results.
  5. Verify that detected secrets trigger alerts and block merges or deployments until remediated.
  6. Flag noncompliance if any Salesforce-related repository lacks active secret scanning coverage.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-OAUTH-001 · Critical
Require Formal Installation of Connected Apps
Failed
What's wrong

Respondent attests at least some Connected Apps are authorized ad-hoc by individual users rather than formally installed. Ad-hoc OAuth grants bypass admin oversight.

Why it matters

Without formal installation, Connected Apps operate outside organizational control—inheriting security configuration from the external app developer rather than the Salesforce administrator. This establishes an unmanaged security boundary: refresh token lifetimes, session policies, and IP restrictions cannot be enforced, allowing tokens to persist indefinitely and enabling unauthorized access from any location. Attackers who compromise a user-authorized OAuth token gain persistent access that administrators cannot revoke or constrain through standard Connected App policies.

Step-by-step fix
  1. Formally install any connected app that appears only as a user-authorized OAuth connection.
  2. Configure the installed connected app's policies, including refresh token and session security settings.
  3. Remove the user-authorized OAuth connections that are now superseded by the installed connected app.
Validation
  1. Enumerate all user-authorized OAuth connected apps via Setup or the Tooling/Metadata API.
  2. Identify all connected apps that are not formally installed as managed or unmanaged connected apps.
  3. Flag any connected app that is used but not formally installed as noncompliant.
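A rough reconciliation of steps 1–3 in Anonymous Apex, assuming the OauthToken and ConnectedApplication objects are queryable in your org (availability varies by edition and API version):
    // Connected apps that have been formally installed in this org.
    Set<String> installedApps = new Set<String>();
    for (ConnectedApplication app : [SELECT Name FROM ConnectedApplication]) {
        installedApps.add(app.Name);
    }
    // Apps users have authorized via OAuth that have no formal installation.
    Set<String> adHocApps = new Set<String>();
    for (OauthToken token : [SELECT AppName FROM OauthToken]) {
        if (!installedApps.contains(token.AppName)) {
            adHocApps.add(token.AppName);
        }
    }
    System.debug('User-authorized apps without a formal installation: ' + adHocApps);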
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-OAUTH-002 · Critical
Require Profile or Permission Set Access Control for Connected Apps
Failed
What's wrong

Respondent attests at least one Connected App is set to "available to all users" rather than gated by profile or permission set.

Why it matters

Without explicit profile or permission set access control, Connected Apps may allow any user in the org to authenticate—bypassing the principle of least privilege and creating an uncontrolled access boundary. This enables unauthorized users to establish OAuth sessions with external systems, potentially exfiltrating data or performing actions beyond their intended scope. The lack of access scoping also eliminates audit visibility into who is authorized to use each integration, preventing detection of unauthorized access patterns.

Step-by-step fix
  1. For each connected app lacking profile or permission set access control, create or update profiles or permission sets to define which users are authorized to access the app.
  2. Assign the appropriate profiles or permission sets to the connected app configuration.
  3. Verify that no users can access the connected app without explicit authorization.
Validation
  1. Enumerate all formally installed connected apps via Setup or the Metadata API.
  2. For each installed connected app, verify that access is granted only through assigned profiles or permission sets.
  3. Flag any connected app that lacks access scoping via profiles or permission sets as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-001 · High
Enforce a Documented Permission Set Model
Failed
What's wrong

Respondent attests they do NOT maintain a documented permission set model. Auditors expect a written, enforced model in a system of record.

Why it matters

Without a documented and enforced permission set model, organizations lose visibility into their authorization structure—accumulating ad hoc permission constructs created for one-time needs that are never reviewed or removed. This results in privilege sprawl, inconsistent access patterns, and inability to audit who has what access and why. Security teams cannot assess authorization posture, detect drift, or investigate access-related incidents when no authoritative model exists to compare against. The lack of continuous enforcement means unauthorized or excessive permissions can persist indefinitely without detection.

Step-by-step fix
  1. Update or deprecate noncompliant profiles, permission sets, and permission set groups to align with the documented permission set model.
  2. Migrate users off legacy or misaligned authorization constructs.
  3. Implement or enhance automated enforcement to ensure continuous alignment with the defined model.
  4. Update the system-of-record documentation as the model changes.
Validation
  1. Obtain the organization's documented permission set model from the designated system of record.
  2. Enumerate all Profiles, Permission Sets, and Permission Set Groups using Salesforce Setup, Metadata API, or Tooling API.
  3. Compare each enumerated item against the documented model to determine whether:
    • Its purpose or persona aligns with the model.
    • Its included permissions conform to the model's structure and boundaries.
    • Its naming and classification match the documented conventions.
  4. Identify any profiles, permission sets, or permission set groups that do not conform to the model.
  5. Verify that the organization has a process or automation that enforces model compliance in near real time (e.g., continuous scanning, pipelines, or governance workflows).
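A minimal sketch of steps 2–4 for standalone permission sets, assuming the documented model can be exported as a set of approved API names (the names below are placeholders); profiles and permission set groups need the same comparison:
    // Placeholder names; load the real list from the system of record.
    Set<String> documentedNames = new Set<String>{ 'Sales_Base', 'Service_Base' };
    List<String> notInModel = new List<String>();
    for (PermissionSet ps : [SELECT Name, Label FROM PermissionSet
                             WHERE IsOwnedByProfile = false]) {
        if (!documentedNames.contains(ps.Name)) {
            notInModel.add(ps.Label);
        }
    }
    System.debug('Permission sets missing from the documented model: ' + notInModel);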
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-ACS-002 · High
Documented Justification for All `API-Enabled` Authorizations
Failed
What's wrong

Respondent attests they do NOT have documented justification for every `API Enabled` user. Programmatic access without documented need expands the attack surface.

Why it matters

Without documented justification for API-enabled authorizations, organizations lose visibility into which users and systems can programmatically access Salesforce data at scale. The API Enabled permission enables large-scale data extraction, bulk modification, and automated operations—capabilities that create significant exposure when granted without oversight. Undocumented API access paths accumulate over time, preventing security teams from assessing data exfiltration risk, investigating suspicious API activity, or enforcing least privilege across automated access patterns.

Step-by-step fix
  1. Remove the API Enabled permission from any profile, permission set, or permission set group that lacks a documented justification and is not required for business operations.
  2. For any authorization that legitimately requires API access, add or update the rationale in the system of record to clearly justify the need.
  3. Reconcile and update the system of record to ensure complete and accurate inventory of all API-enabled authorizations.
Validation
  1. Enumerate all profiles, permission sets, and permission set groups that include the API Enabled permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
  2. Compare the enumerated list against the organization’s designated system of record for API-enabled authorizations.
  3. Verify that every profile, permission set, and permission set group granting “API Enabled” has a corresponding entry in the system of record.
  4. Confirm that each entry includes:
    • A clear business or technical justification for API access, and
    • Any applicable exception or approval documentation.
  5. Flag as noncompliant any authorizations lacking documentation or justification.
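A minimal sketch of step 1 in Anonymous Apex; the PermissionsApiEnabled field covers both profile-owned and standalone permission sets:
    // Enumerate every profile and permission set that grants API Enabled.
    for (PermissionSet ps : [SELECT Label, IsOwnedByProfile, Profile.Name
                             FROM PermissionSet
                             WHERE PermissionsApiEnabled = true]) {
        String grantor = ps.IsOwnedByProfile
            ? 'Profile: ' + ps.Profile.Name
            : 'Permission set: ' + ps.Label;
        System.debug('API Enabled granted by ' + grantor);
    }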
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-004 · High
Documented Justification for All Super Admin–Equivalent Users
Failed
What's wrong

Respondent attests they do NOT have documented justification for super-admin-equivalent users.

Why it matters

Without documented justification for Super Admin–equivalent users, organizations lose visibility into who possesses unrestricted access to the entire Salesforce environment. These users can read and modify all data, manage user accounts, and alter the security posture of the org without oversight. Undocumented Super Admin access prevents security teams from assessing breach impact, investigating administrative actions, or maintaining accountability for the most sensitive operations. The inability to identify and justify these users also prevents effective access reviews and creates persistent exposure from forgotten or orphaned administrative accounts.

Step-by-step fix
  1. Remove one or more of the Super Admin–equivalent permissions from any user who does not have a documented business or technical justification.
  2. For users who legitimately require this level of access, add or update rationale within the system of record.
  3. Reassess user access to ensure alignment with least privilege, reducing broad permissions where narrower privileges are sufficient.
Validation
  1. Enumerate all users who simultaneously possess the following permissions through any profile, permission set, or permission set group:
    • View All Data
    • Modify All Data
    • Manage Users
  2. Compile a list of all users meeting the criteria for Super Admin–equivalent access.
  3. Compare the list against the organization’s system of record.
  4. Verify that each Super Admin–equivalent user has corresponding documentation that includes:
    • A clear business or technical justification for requiring this level of access, and
    • Any relevant exception or approval records.
  5. Flag as noncompliant any users with Super Admin–equivalent access lacking documentation or justification.
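A sketch of steps 1–2. Profile permissions are captured because every user also carries an assignment to their profile-owned permission set; confirm separately that permission set group grants surface the same way in your org:
    // Collect which of the three permissions each active user holds through any assignment.
    Map<Id, Set<String>> permsByUser = new Map<Id, Set<String>>();
    for (PermissionSetAssignment psa : [
            SELECT AssigneeId,
                   PermissionSet.PermissionsViewAllData,
                   PermissionSet.PermissionsModifyAllData,
                   PermissionSet.PermissionsManageUsers
            FROM PermissionSetAssignment
            WHERE Assignee.IsActive = true]) {
        Set<String> perms = permsByUser.containsKey(psa.AssigneeId)
            ? permsByUser.get(psa.AssigneeId)
            : new Set<String>();
        if (psa.PermissionSet.PermissionsViewAllData) { perms.add('View All Data'); }
        if (psa.PermissionSet.PermissionsModifyAllData) { perms.add('Modify All Data'); }
        if (psa.PermissionSet.PermissionsManageUsers) { perms.add('Manage Users'); }
        permsByUser.put(psa.AssigneeId, perms);
    }
    // Users holding all three are Super Admin–equivalent and need documented justification.
    Set<Id> superAdminEquivalent = new Set<Id>();
    for (Id userId : permsByUser.keySet()) {
        if (permsByUser.get(userId).size() == 3) {
            superAdminEquivalent.add(userId);
        }
    }
    System.debug(superAdminEquivalent.size() + ' Super Admin–equivalent users found');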
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-005 · High
Only Use Custom Profiles for Active Users
Failed
What's wrong

Respondent attests they have active users on the standard `Standard User` profile, which cannot be tightened safely without breaking other users on the same profile.

Why it matters

Standard profiles are managed by Salesforce, not the organization—meaning Salesforce can enable permissions and object access on these profiles when features are released or platform updates occur without administrator approval. This creates an uncontrolled change vector: users assigned to standard profiles may gain new capabilities unexpectedly, bypassing the organization's authorization governance. Standard profiles are also overly permissive by default (e.g., "Standard User" grants "View Setup," "System Administrator" grants developer-level permissions), making it impossible to enforce least privilege. Without custom profiles, organizations cannot investigate authorization changes or maintain accountability for who approved which permissions.

Step-by-step fix
  1. Set up a custom profile for each standard profile that is in use.
  2. Manage permissions and object access on these custom profiles so they comply with the other SBS controls.
  3. Assign the new custom profiles to your active users, following the principle of least-privilege access.
Validation
  1. Enumerate all active human users (IsActive = true on the User record).
  2. Flag as noncompliant any user assigned a standard profile (custom = false in the Profile metadata).
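A quick way to see the population from step 1 grouped by profile; which of those profiles are standard still needs to be confirmed against the retrieved Profile metadata:
    // Count active users per profile, then review which of the profiles are standard.
    Map<String, Integer> activeUsersByProfile = new Map<String, Integer>();
    for (User u : [SELECT Profile.Name FROM User
                   WHERE IsActive = true AND ProfileId != null]) {
        Integer current = activeUsersByProfile.containsKey(u.Profile.Name)
            ? activeUsersByProfile.get(u.Profile.Name) : 0;
        activeUsersByProfile.put(u.Profile.Name, current + 1);
    }
    System.debug(activeUsersByProfile);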
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-007 · High
Maintain Inventory of Non-Human Identities
Failed
What's wrong

Respondent attests they do NOT maintain a current inventory of non-human accounts. Without an inventory, no other NHI control can be enforced.

Why it matters

Without a comprehensive inventory of non-human identities, organizations cannot detect, investigate, or respond to security incidents involving automated access. Non-human identities are frequently created for integrations or automation projects and then forgotten—accumulating as orphaned accounts with persistent credentials and elevated access. Security teams cannot assess which automated systems access Salesforce data, identify compromised integration credentials, or scope the impact of a vendor breach. This loss of visibility prevents effective governance of automated access and creates persistent security exposure from untracked machine accounts.

Step-by-step fix
  1. Query Salesforce to identify all potential non-human identities using the criteria in the audit procedure
  2. For each identified identity, document: name, type (integration/bot/API), purpose, business owner, creation date
  3. Establish a process to update the inventory when non-human identities are created, modified, or deactivated
  4. Implement quarterly reviews of the inventory to identify and deactivate unused accounts
  5. Store the inventory in an authoritative system of record accessible to security and compliance teams
Validation
  1. Request the organization's inventory of non-human identities
  2. Query Salesforce for all users where IsActive = true and any of the following conditions apply:
    • Username contains "integration", "api", "bot", "automation", or "service"
    • Profile name contains "Integration", "API", or similar indicators
    • User has "API Only User" permission enabled
    • User is associated with Einstein Bot or Flow automation
  3. Compare the inventory to the query results to identify discrepancies
  4. Verify the inventory includes: identity name, type, purpose, business owner, creation date, and last login date
  5. Confirm the inventory is reviewed and updated at least quarterly
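A starting query for step 2 in Anonymous Apex; the username patterns mirror the criteria above and will need tuning to your naming conventions:
    // Candidate non-human identities by username convention; reconcile against the inventory.
    List<User> candidates = [
        SELECT Id, Username, Profile.Name, CreatedDate, LastLoginDate
        FROM User
        WHERE IsActive = true
          AND (Username LIKE '%integration%' OR Username LIKE '%api%'
               OR Username LIKE '%bot%' OR Username LIKE '%automation%'
               OR Username LIKE '%service%')
    ];
    System.debug(candidates.size() + ' candidate non-human identities found');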
Effort: ~3h · Difficulty: Medium · Scope: inventory
SBS-ACS-008 · High
Restrict Broad Privileges for Non-Human Identities
Failed
What's wrong

Respondent attests their non-human accounts have broader permissions than they need. Over-privileged service accounts are a leading source of breach blast-radius.

Why it matters

Without documented justification for broad non-human identity privileges, organizations lose visibility into which automated systems can bypass sharing rules or perform administrative operations. Non-human identities operate without human judgment, making over-privileged automation a high-impact target—compromised credentials can result in complete data extraction, system-wide configuration changes, or persistent backdoor access. Many non-human identities are granted excessive permissions during initial setup and never reviewed, creating long-lived security exposure that security teams cannot detect, investigate, or remediate without knowing which identities have which privileges and why.

Step-by-step fix
  1. For each non-human identity with broad privileges, evaluate whether the permission is genuinely required for the identity's function
  2. Remove broad privileges that are not necessary; replace with more granular permissions where possible
  3. For non-human identities that legitimately require broad privileges, document:
    • Specific business function requiring the permission
    • Why more granular permissions cannot satisfy the requirement
    • Business owner and technical owner
    • Approval from security or compliance team
  4. Implement a formal approval process for granting broad privileges to non-human identities
  5. Establish periodic review (at least annually) of all non-human identities with broad privileges
Validation
  1. Using the non-human identity inventory from SBS-ACS-007, identify all non-human identities
  2. For each non-human identity, query assigned permissions through profiles, permission sets, and permission set groups
  3. Flag any non-human identity with one or more of the following permissions:
    • View All Data
    • Modify All Data
    • Manage Users
    • Author Apex
    • Customize Application
    • Any permission that bypasses sharing rules or grants administrative access
  4. For each flagged identity, verify that documented business justification exists explaining why the permission is required
  5. Confirm the justification was approved by appropriate stakeholders (security, compliance, or management)
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-011 · High
Enforce Governance of Access and Authorization Changes
Failed
What's wrong

Respondent attests access changes do NOT go through a documented approval process with an audit trail. Without governance, the permission model drifts silently.

Why it matters

Without enforced governance over access changes, organizations lose visibility and control over how privileges are granted and modified. Ad hoc access changes increase the risk of excessive privileges, unauthorized access, and violations of least-privilege principles. The absence of approval, justification, or auditability impairs incident investigation, undermines access reviews, and weakens compliance evidence for audits involving identity, access management, and change control.

Step-by-step fix
  1. Establish and document a formal governance process for access and authorization changes.
  2. Require approval and business justification for all access modifications.
  3. Ensure access changes are recorded in an auditable system of record.
Validation
  1. Retrieve evidence of the organization's documented process governing access and authorization changes.
  2. Identify access-related changes made during a representative review period.
  3. For a sample of changes, verify:
    • An approval record exists prior to implementation
    • Business justification is documented
    • The change is traceable to an identifiable request
    • The implemented change is recorded in available audit or change history records
  4. Identify any access changes lacking approval, justification, or auditability as noncompliant.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-CODE-003 · High
Implement Persistent Apex Application Logging
Failed
What's wrong

Respondent attests they do NOT have a persistent Apex logging framework. Without it, security-relevant application events are unrecoverable.

Why it matters

Salesforce debug logs are transient, size-limited, and automatically purged—making them unsuitable for forensic analysis or security investigations. Without persistent application logging, organizations cannot reliably reconstruct access patterns, detect anomalous behavior, or investigate security incidents after the fact. This impairs the ability to identify compromise, attribute malicious activity, or understand the scope of a breach—significantly extending attacker dwell time and reducing accountability for actions taken within the system.

Step-by-step fix
  1. Implement or install an Apex logging framework designed for persistent log storage.
  2. Create or configure a custom object (or equivalent durable storage) to store log records.
  3. Update Apex code to route log events through the framework.
  4. Train engineering and security teams to use persistent logs instead of debug logs for investigations.
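A bare-bones sketch of the pattern from steps 1–3, assuming a hypothetical custom object Log_Event__c with Level__c and Message__c fields (names are illustrative):
    public class PersistentLogger {
        public static void log(String level, String message) {
            // Write to durable storage instead of relying on transient debug logs.
            insert new Log_Event__c(
                Level__c = level,
                Message__c = message == null ? null : message.left(32768)
            );
        }
    }
Production-grade frameworks typically also buffer entries and publish them as platform events so records can survive transaction rollback; open-source options such as Nebula Logger follow that pattern.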
Validation
  1. Review the Salesforce org for the presence of an Apex logging framework implemented as one or more Apex classes dedicated to log generation and persistence.
  2. Verify that the framework writes logs to durable storage, such as a custom object purpose-built for log retention.
  3. Confirm that operational and security investigations rely on this persistent logging mechanism rather than Salesforce debug logs.
  4. Inspect recent log records to ensure the framework is actively capturing runtime events.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-CPORTAL-003 · High
Inventory Portal-Exposed Apex Classes and Flows
Failed
What's wrong

Respondent attests no inventory of portal-exposed Apex classes and Flows exists. External attack surface cannot be assessed; security testing has no authoritative scope.

Why it matters

Without a complete inventory of portal-exposed components, organizations cannot assess their external attack surface or enforce security reviews for externally accessible code. Security teams lose visibility into which business logic external users can invoke, preventing effective security testing, incident response, and access governance. This impairs the ability to detect unauthorized exposure of sensitive functionality or identify components requiring security hardening.

Step-by-step fix
  1. Enumerate all Apex classes containing @AuraEnabled methods.
  2. Enumerate all Autolaunched Flows embedded in Experience Cloud sites.
  3. For each component, document which portal user profiles and permission sets have access.
  4. Store the inventory in the designated system of record.
  5. Establish a process to update the inventory when new components are exposed to portals.
Validation
  1. Request the organization's inventory of portal-exposed Apex classes and Flows from the designated system of record.
  2. Query all Apex classes with @AuraEnabled methods accessible to portal user profiles.
  3. Query all Autolaunched Flows invoked from Experience Cloud pages or components.
  4. Verify each component appears in the inventory with documentation of which portal profiles can access it.
  5. Flag any portal-exposed component missing from the inventory as noncompliant.
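A quick sweep for step 2 in Anonymous Apex; matching on the class body is a heuristic, so confirm hits manually and then record which portal profiles can execute them:
    // Find local (non-packaged) Apex classes that declare @AuraEnabled methods.
    // In large orgs, batch this query to stay within heap limits.
    List<String> exposedClasses = new List<String>();
    for (ApexClass cls : [SELECT Name, Body FROM ApexClass WHERE NamespacePrefix = null]) {
        if (cls.Body != null && cls.Body.contains('@AuraEnabled')) {
            exposedClasses.add(cls.Name);
        }
    }
    System.debug('Classes with @AuraEnabled methods: ' + exposedClasses);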
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-CPORTAL-005 · High
Conduct Penetration Testing for Portal Security
Failed
What's wrong

Respondent attests portal penetration testing has not been performed (or is not on a defined cadence). Configuration audits alone cannot validate runtime authorization behavior.

Why it matters

Without regular penetration testing, organizations cannot verify that portal security controls function correctly when adversaries attempt to exploit them. Configuration audits verify settings exist but cannot validate runtime behavior under attack. Undetected vulnerabilities in portal-exposed components allow unauthorized data access.

Step-by-step fix
  1. Conduct penetration testing before initial portal go-live.
  2. Define ongoing testing cadence based on regulatory requirements and release frequency.
  3. Engage qualified penetration testers with Salesforce Experience Cloud expertise.
  4. Define test scope covering all portal-exposed components.
  5. Conduct testing according to defined cadence and after major portal changes.
  6. Remediate identified vulnerabilities before production deployment.
Validation
  1. Verify penetration testing was conducted before initial portal go-live.
  2. Verify the organization has defined an ongoing testing cadence based on regulatory requirements and change frequency.
  3. Request documentation of the most recent portal penetration test.
  4. Verify testing occurred according to the defined cadence or after major releases.
  5. Confirm test scope included portal-exposed Apex classes and Flows.
  6. Review test report for identified vulnerabilities and remediation status.
  7. Flag as noncompliant if no go-live testing occurred, ongoing testing does not follow the defined cadence, or if high/critical findings remain unremediated.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-DATA-001 · High
Implement Mechanisms to Detect Regulated Data in Long Text Area Fields
Failed
What's wrong

Respondent attests they do NOT have a mechanism to detect regulated data in Long Text Area fields. Free-text fields are a common, hard-to-find regulated-data hiding spot.

Why it matters

Long Text Area fields often contain unstructured, user-entered information that may include sensitive personal data. Without a detection mechanism, regulated data accumulates in unknown locations—obstructing compliance with GDPR Right to Erasure, CCPA deletion requests, and similar privacy obligations. During a security incident, the inability to identify which fields contain personal information makes it impossible to accurately assess exposure, determine the scope of compromised records, or fulfill breach notification requirements. This governance gap significantly impairs incident response and creates ongoing regulatory liability.

Step-by-step fix
  1. Deploy or configure a tool, script, or process capable of analyzing the contents of LTA fields for regulated data.
  2. Ensure scans run continuously or on a recurring schedule.
  3. Confirm all applicable fields across all objects are included.
  4. Document the scanning process and store execution evidence for audit support.
Validation
  1. Identify all Long Text Area fields using Salesforce metadata.
  2. Determine whether the organization has a mechanism that scans the contents of each LTA field for regulated data.
  3. Confirm that scanning occurs continuously or on a defined recurring schedule.
  4. Review scan logs, detection outputs, or configuration details to verify that the mechanism is operational.
  5. Validate that all LTA fields across all objects are included in scope.
  6. Determine compliance based on whether such a mechanism exists and is functioning.
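For step 1, a describe-based sweep per object avoids a Metadata API round-trip; long text areas report as TEXTAREA fields longer than 255 characters (Account is used here as an example):
    // List Long Text Area fields on one object; repeat for every object in scope.
    for (Schema.SObjectField f : Schema.SObjectType.Account.fields.getMap().values()) {
        Schema.DescribeFieldResult d = f.getDescribe();
        // Rich Text Area fields also match; exclude them with d.isHtmlFormatted() if needed.
        if (d.getType() == Schema.DisplayType.TEXTAREA && d.getLength() > 255) {
            System.debug('Long Text Area field: Account.' + d.getName());
        }
    }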
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-DATA-003 · High
Maintain Tested Backup and Recovery for Salesforce Data and Metadata
Failed
What's wrong

Respondent attests their Salesforce data/metadata backup is NOT tested with a scheduled restore. Untested backups commonly fail when actually needed.

Why it matters

Without reliable backups and tested restoration procedures, organizations cannot recover from accidental deletion, malicious data destruction, configuration corruption, or ransomware-like events. This impairs incident response, business continuity, and the ability to validate data integrity after security events or outages.

Step-by-step fix
  1. Implement or configure a backup solution for Salesforce data and metadata.
  2. Define backup frequency, retention, and storage protections.
  3. Execute and document restoration tests on the defined schedule.
  4. Update recovery procedures based on test results.
Validation
  1. Obtain the documented backup and recovery policy covering Salesforce data and metadata.
  2. Verify that backups are performed on a defined schedule and retained per policy.
  3. Review evidence of a completed restoration test within the defined testing interval.
  4. Confirm that backup storage is protected with appropriate access controls.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-DATA-004 · High
Require Field History Tracking for Sensitive Fields
Failed
What's wrong

Respondent attests at least one sensitive field is NOT covered by Field History Tracking. Without it, unauthorized changes go undetected.

Why it matters

Without field history tracking on sensitive fields, unauthorized or accidental changes cannot be reliably detected or investigated. This reduces auditability, impairs incident response, and weakens accountability for changes to regulated or high-impact data.

Step-by-step fix
  1. Enable Field History Tracking for all listed sensitive fields.
  2. Update the sensitive-field list as schemas evolve.
  3. Re-verify tracking coverage after changes.
Validation
  1. Obtain the organization’s documented list of sensitive fields and in-scope objects.
  2. Enumerate Field History Tracking settings for those objects.
  3. Verify that each listed sensitive field has Field History Tracking enabled.
  4. Flag any sensitive field without tracking.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-DEP-001 · High
Require a Designated Deployment Identity for Metadata Changes
Failed
What's wrong

Respondent attests deployments are spread across individual admin accounts. A designated deployment identity is needed for clean change attribution.

Why it matters

Without a designated deployment identity, organizations cannot reliably attribute production changes—any administrator can deploy metadata, making it impossible to distinguish authorized CI/CD deployments from unauthorized manual changes. This loss of provenance prevents security teams from detecting unauthorized modifications, investigating configuration drift, or determining whether a change was part of an approved release. Attackers or malicious insiders can make direct production changes that blend into legitimate administrative activity, and incident responders cannot reconstruct the timeline of configuration changes during a breach investigation.

Step-by-step fix
  1. Create or identify a dedicated deployment identity.
  2. Reconfigure CI/CD pipelines, release management tooling, and automated deployment scripts to authenticate exclusively with the deployment identity.
  3. Revoke deployment permissions from all human users.
  4. Re-deploy any metadata last deployed by a human user to restore provenance.
Validation
  1. Identify the user account designated as the deployment identity.
  2. Enumerate all recent metadata deployments using tooling such as Deployment Status, Metadata API logs, or audit logs.
  3. Verify that all deployments were executed by the designated deployment identity.
  4. Flag any metadata deployment performed by a human user or non-deployment identity.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-DEP-002 · High
Establish and Maintain a List of High-Risk Metadata Types Prohibited from Direct Production Editing
Failed
What's wrong

Respondent attests they do NOT maintain a written list of high-risk metadata types prohibited from direct production editing.

Why it matters

Without an explicit list of high-risk metadata types, organizations cannot define or enforce deployment governance boundaries—leaving critical configuration categories (Apex code, authentication settings, outbound connectivity, permissions) open to uncontrolled direct production editing. Security teams cannot distinguish between metadata that requires strict deployment controls and metadata that can be safely edited manually, resulting in inconsistent governance and gaps in change attribution. The absence of a defined list also prevents effective monitoring (SBS-DEP-003), as there is no baseline to compare against when detecting unauthorized changes.

Step-by-step fix
  1. Adopt the SBS baseline list of prohibited direct-in-production metadata changes.
  2. Add any organization-specific items or exceptions as needed.
  3. Remove modify permissions for these metadata types from all human users.
  4. Ensure all future changes to listed metadata types are performed exclusively by the deployment identity.
Validation
  1. Obtain the organization's documented list of high-risk metadata types prohibited from direct production editing.
  2. Confirm that the list, at minimum, includes all SBS baseline categories.
  3. Review the list for any documented exceptions and verify they are formally approved.
  4. Verify that only the deployment identity has modify permissions for metadata types on the list.
Effort: ~3h · Difficulty: Medium · Scope: inventory
SBS-DEP-003 · High
Monitor and Alert on Unauthorized Modifications to High-Risk Metadata
Failed
What's wrong

Respondent attests they do NOT receive alerts on unauthorized high-risk metadata changes in production. Without alerts, unauthorized changes go undetected.

Why it matters

Without monitoring for unauthorized metadata changes, organizations cannot detect when high-risk configuration is modified outside the approved deployment process—allowing malicious changes, accidental drift, or insider threats to persist undetected. Security teams lose the ability to identify unauthorized modifications to authentication settings, permission structures, Apex code, or outbound connectivity until a breach or incident reveals the gap. This impairs detection, investigation, and response capabilities for configuration-related security events, extending attacker dwell time and preventing timely remediation of unauthorized changes.

Step-by-step fix
  1. Implement a monitoring mechanism capable of identifying modifications to high-risk metadata and attributing them to the responsible user. Acceptable approaches include:
    • Manual periodic review of the Salesforce Setup Audit Trail,
    • Exporting audit logs for review,
    • Scheduled API or CLI queries comparing metadata changes,
    • Custom scripts,
    • Vendor-based monitoring tools.
  2. Ensure the monitoring method covers all high-risk metadata types listed in the organization’s defined prohibited-direct-edit list.
  3. Define a repeatable review interval and assign responsibility for conducting the review.
  4. Document the monitoring approach and maintain records of reviews and findings.
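A minimal sketch of the scheduled-query option above, assuming the simple_salesforce Python library, org credentials supplied via environment variables, and a placeholder deployment identity username; the high-risk Section values shown are illustrative and should mirror your SBS-DEP-002 list.

# Hypothetical sketch: flag high-risk Setup changes made outside the deployment identity.
import os
from simple_salesforce import Salesforce

DEPLOYMENT_IDENTITY = "deploy@yourorg.example"        # placeholder username
HIGH_RISK_SECTIONS = {                                # mirror your SBS-DEP-002 list
    "Apex Class", "Connected Apps", "Session Settings", "Named Credentials",
}

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

rows = sf.query_all(
    "SELECT CreatedBy.Username, Section, Action, Display, CreatedDate "
    "FROM SetupAuditTrail WHERE CreatedDate = LAST_N_DAYS:1"
)["records"]

for row in rows:
    user = (row.get("CreatedBy") or {}).get("Username", "unknown")
    if row.get("Section") in HIGH_RISK_SECTIONS and user != DEPLOYMENT_IDENTITY:
        print(f"ALERT: {user} changed {row['Section']}: {row['Display']} at {row['CreatedDate']}")

Run on the review interval defined in step 3; the same query backs a manual audit-trail review if scripting is not yet in place.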
Validation
  1. Interview system owners to identify the monitoring method(s) used for detecting changes to high-risk metadata.
  2. Review documentation describing how the monitoring process works—whether manual log review, automated scripts, API queries, CLI workflows, scheduled exports, or vendor tools.
  3. Verify that the monitoring process includes:
    • Coverage of all high-risk metadata types defined by the organization and required by SBS-DEP-002.
    • A review interval appropriate to the organization's change-management expectations (e.g., daily, weekly, or aligned with release cycles).
    • A method for identifying the user who performed each change.
  4. Examine historical monitoring records or logs to confirm the process has been performed consistently.
  5. Flag noncompliance if no monitoring system exists or if the system cannot detect unauthorized human modifications to high-risk metadata.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-DEP-004 · High
Establish Source-Driven Development Process
Failed
What's wrong

Respondent attests they do NOT have branch protection or CI/CD controls on their Salesforce source repository. Without them, anyone with repo access can push directly to production.

Why it matters

Without a source-driven deployment process, organizations lose the verifiable audit trail that connects production configuration to approved changes—making it impossible to determine what changed, when, by whom, and whether it was authorized. Security teams cannot investigate configuration-related incidents, restore known-good state during outages, or attribute changes during forensic analysis. Manual production changes bypass code review, testing, and approval workflows, enabling unauthorized, accidental, or malicious modifications to security-sensitive settings without accountability or detection.

Step-by-step fix
  1. Establish and maintain a centralized version control repository for Salesforce metadata.
  2. Implement or enforce an automated deployment pipeline that deploys changes exclusively from version control.
  3. Restrict direct production changes for metadata types that support programmatic deployment.
  4. Document and periodically review any required manual production changes for metadata types lacking deployment support.
Validation
  1. Identify the organization’s standard deployment process and designated deployment identity as defined in SBS-CHG-001.
  2. Review recent production metadata changes and their associated deployment records.
  3. Verify that changes deployable through Salesforce’s programmatic deployment mechanisms originated from centralized version control.
  4. Confirm that any manual production changes are limited to metadata types that Salesforce does not support for programmatic deployment.
  5. Flag any manually applied changes that could have been deployed through the source-driven process.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-DEP-006 · High
Configure Salesforce CLI Connected App with Token Expiration Policies
Failed
What's wrong

Respondent attests the Salesforce CLI Connected App is NOT configured with strict token expiration policies. Long-lived CLI credentials become silent attack vectors when developer machines are compromised.

Why it matters

Salesforce CLI token files stored on local workstations represent a persistent credential exposure risk. If a laptop is stolen, reassigned without proper cleanup, or compromised by malware, attackers can extract token files that provide direct access to Salesforce orgs—including production environments. With the default Connected App configuration, these tokens never expire, giving attackers indefinite access that persists even after the original user's password is changed or their account is deactivated. The attack surface expands with each org a developer authenticates to, as token files accumulate credentials to sandboxes, Dev Hubs, and production orgs. Organizations cannot detect this credential theft through Salesforce audit logs because the attacker authenticates with valid tokens.

Step-by-step fix
  1. Determine whether to use the default "Salesforce CLI" Connected App or create a dedicated Connected App for CLI authentication.
  2. If using the default app:
    • From Setup, navigate to Connected Apps OAuth Usage.
    • Locate "Salesforce CLI" and click Install (if not already installed), then Edit Policies.
    • Set Refresh Token Policy to "Expire refresh token after: 90 Days" (or less).
    • Set Session Policies Timeout Value to "15 minutes" (or less).
  3. If creating a dedicated Connected App:
    • Create a new Connected App with OAuth enabled and appropriate callback URL.
    • Configure refresh token expiry to 90 days or less and access token timeout to 15 minutes or less.
    • Distribute the Consumer Key to developers and require use of the --client-id flag.
  4. Communicate to developers that they will need to re-authenticate periodically when refresh tokens expire.
  5. Consider implementing compensating controls to protect locally stored token files, such as:
    • Requiring full disk encryption (FileVault, BitLocker) on developer workstations.
    • Enabling remote wipe capability for managed devices.
    • Including Salesforce CLI token file cleanup in device offboarding procedures.
    • Training developers to run sf org logout --all before returning or transferring devices.
Validation
  1. From Setup, navigate to Connected Apps OAuth Usage (or Apps → Connected Apps → Connected Apps OAuth Usage).
  2. Identify the Connected App(s) used for Salesforce CLI authentication—either the default "Salesforce CLI" app or a custom Connected App.
  3. Review the OAuth Policies for each CLI-related Connected App:
    • Verify that Refresh Token Policy is set to "Expire refresh token after" with a value of 90 days or less.
    • Verify that Session Policies Timeout Value is set to 15 minutes or less.
  4. If a custom Connected App is used, verify that developers are instructed to use the --client-id flag when authenticating.
  5. Flag noncompliance if any CLI-related Connected App has tokens set to never expire or exceeds the maximum allowed durations.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-FDNS-001 · High
Centralized Security System of Record
Failed
What's wrong

Respondent attests no centralized system of record for Salesforce security governance exists. Compliance posture depends on personal knowledge and is not reliably reconstructible.

Why it matters

A formally maintained system of record is foundational for ensuring repeatable, auditable, and transparent security governance. Without a centralized repository, organizations cannot reliably track required justifications, exceptions, or configuration states, resulting in control failures, loss of historical context, and inconsistent application of security standards. A system of record enables auditors, security engineers, and automation tools to validate compliance objectively.

Step-by-step fix
  1. Establish or designate a centralized system of record capable of storing and maintaining all required SBS documentation.
  2. Populate the system of record with all missing inventories, justifications, and security-relevant artifacts mandated by SBS controls.
  3. Implement a maintenance process to keep the system of record current with ongoing changes to the Salesforce environment.
Validation
  1. Identify and review the organization’s designated system of record for Salesforce security governance.
  2. Verify that the system of record is centrally accessible to authorized personnel and is not dependent on individual personnel knowledge.
  3. Confirm that the system of record includes all artifacts required by SBS controls, including:
    • Documented justifications for elevated permissions or exceptions.
    • Inventories of profiles, permission sets, permission set groups, integrations, API-enabled entities, and other required listings.
    • Recorded security decisions, approvals, and exceptions.
  4. Validate that the system of record is current and reflects the state of the Salesforce environment at the time of audit.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-FILE-002 · High
Require Passwords on Public Content Links for Sensitive Content
Failed
What's wrong

Respondent attests sensitive Public Content links are not password-protected. Anyone obtaining the link — through interception, accidental sharing, or harvesting — can immediately access the data.

Why it matters

Without a password, anyone who obtains an unexpired Public Content link—through interception, accidental sharing, or link harvesting—can immediately access the associated data. For sensitive content, this creates a direct path to data exposure that requires only link acquisition. Password protection adds an authentication layer that prevents opportunistic access and limits the impact of link compromise, supporting breach containment and regulatory compliance for sensitive data handling.

Step-by-step fix
  1. For each flagged content distribution record, determine the sensitivity classification of the associated content.
  2. For sensitive content, set a password on the ContentDistribution record via the Salesforce UI.
  3. Communicate the password to intended recipients through a separate, secure channel.
  4. Establish organizational policy requiring password protection for all Public Content links to sensitive data.
Validation
  1. Enumerate all ContentDistribution object records via the SOAP/REST API or Apex.
  2. Identify all records where Password is null.
  3. Cross-reference with content classification to identify sensitive content lacking password protection.
  4. Flag any Public Content links to sensitive content without passwords for review.
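A minimal sketch of validation steps 1–2, assuming the simple_salesforce Python library and session credentials in environment variables; the classification cross-reference in step 3 still has to happen against your own data inventory.

# Hypothetical sketch: list Public Content links that have no password set.
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

links = sf.query_all(
    "SELECT Id, Name, ContentDocumentId, Password, CreatedBy.Username "
    "FROM ContentDistribution"
)["records"]

for link in links:
    if not link.get("Password"):
        # Cross-reference ContentDocumentId against your data classification (step 3).
        print(f"No password: {link['Name']} (document {link['ContentDocumentId']}, "
              f"created by {link['CreatedBy']['Username']})")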
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-INT-004 · High
Retain API Total Usage Event Logs for 30 Days
Failed
What's wrong

Respondent attests they retain less than 30 days of `ApiTotalUsage` event logs. Without sufficient retention, anomalous API behavior is invisible after the fact.

Why it matters

Without retained API Total Usage logs, organizations lose visibility into REST, SOAP, and Bulk API activity—including user identity, connected app, client IP, resource accessed, and status codes. This materially degrades the ability to detect anomalous API behavior, investigate security incidents, attribute unauthorized access, and determine the scope of potential breaches. The absence of this visibility creates a significant gap in incident detection and response capabilities.

Step-by-step fix
  1. If the organization has only 1-day ApiTotalUsage EventLogFile availability in Salesforce, implement an automated daily export that downloads newly available ApiTotalUsage log files and stores them externally for at least 30 days.
  2. If the organization uses Salesforce-native retention, ensure the configured retention period for Event Log Files is not less than 30 days.
  3. Restrict access to the retained logs (Salesforce-native or external) to authorized personnel and designated service identities.
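A minimal sketch of the daily export in step 1, assuming the simple_salesforce Python library, session credentials in environment variables, and a placeholder local directory standing in for your durable 30-day store; run it on a daily schedule (cron or equivalent).

# Hypothetical sketch: daily export of ApiTotalUsage event log files.
import os
import requests
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

out_dir = "apitotalusage_logs"        # placeholder: replace with your 30-day retention store
os.makedirs(out_dir, exist_ok=True)

logs = sf.query_all(
    "SELECT Id, LogDate, LogFile FROM EventLogFile "
    "WHERE EventType = 'ApiTotalUsage' ORDER BY LogDate DESC"
)["records"]

for log in logs:
    # LogFile is a relative REST path to the CSV payload for that day.
    resp = requests.get(
        f"https://{sf.sf_instance}{log['LogFile']}",
        headers={"Authorization": f"Bearer {sf.session_id}"},
        timeout=60,
    )
    resp.raise_for_status()
    with open(os.path.join(out_dir, f"{log['LogDate'][:10]}.csv"), "wb") as fh:
        fh.write(resp.content)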
Validation
  1. Determine whether the organization relies on Salesforce-native retention (Event Monitoring/Shield/Event Monitoring add-on) or an external log store as the system of record for ApiTotalUsage EventLogFile data.
  2. If the organization relies on Salesforce-native retention, verify that EventLogFile data is retained for at least 30 days (for example, confirm the org is entitled to and configured for Event Log File retention that is at least 30 days and can retrieve ApiTotalUsage EventLogFile data within the preceding 30-day window).
  3. If the organization relies on an external log store (including all orgs with only 1-day ApiTotalUsage availability in Salesforce):
  • Verify an automated process exists that retrieves EventLogFile entries where EventType='ApiTotalUsage' and downloads the associated log files at least once every 24 hours.
  • Inspect job schedules/run history and confirm successful executions covering at least the last 30 days (no missed days).
  • From the external log store, retrieve ApiTotalUsage logs for (a) the oldest day in the preceding 30-day window and (b) the most recent day, and confirm both are accessible and attributable to the organization.
  • Verify access to the external log store is restricted to authorized roles and service identities responsible for monitoring and investigations.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-MON-001 · High
Enable Event Monitoring Log Storage
Failed
What's wrong

Respondent attests Event Monitoring log storage is not enabled for required event types. Salesforce logs cannot be retroactively generated — telemetry is permanently lost.

Why it matters

Failure to ensure Event Monitoring log storage is enabled creates a significant monitoring gap.

Without these logs:

  • Security teams cannot reconstruct user activity during a suspected breach.
  • Data exfiltration events (such as large file downloads or abnormal API usage) may go undetected.
  • Detailed forensic investigations become impossible due to lack of historical telemetry.
  • Organizations may fail to meet regulatory or internal security logging requirements.

Because Salesforce logs cannot be retroactively generated, failing to retain required event logs results in permanent loss of security telemetry.

Step-by-step fix
  1. Navigate to Setup
  2. Open Event Monitoring Settings
  3. Enable Generate Event Log Files
  4. Open Event Manager
  5. For each required event type:
    • Select the event
    • Enable Storage

Organizations should also implement automated export or integration with external monitoring platforms where appropriate.

Validation
  1. Navigate to Setup
  2. Open Event Monitoring Settings
  3. Verify that Generate Event Log Files is enabled
  4. Open Event Manager
  5. Review each event type and verify that Enable Storage is enabled for all log types required by the organization's security monitoring policy
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-MON-002 · High
Retaining Event Logs
Failed
What's wrong

Respondent attests event log retention is not aligned with the organization's required retention period. Forensic data may be unavailable for slow-burn incident reconstruction.

Why it matters

Failure to implement adequate log retention and protection creates significant risks for incident response and regulatory compliance.

If logs are not retained for a sufficient period:

  • Security teams may be unable to reconstruct the timeline of an attack.
  • Indicators of compromise may be permanently lost before detection occurs.
  • The organization may be unable to determine the scope of data exposure.

In addition, attackers who obtain administrative privileges may attempt to delete stored event logs to conceal malicious activity. Without secure external copies stored in a separate system, this could permanently destroy critical forensic evidence.

Because Salesforce logs cannot be retroactively generated, failure to retain logs results in permanent loss of investigative data.

Step-by-step fix
  1. If longer retention is required, purchase Salesforce Event Monitoring or export logs to an external secure storage system.
  2. Implement automated log export or forwarding to a SIEM or centralized monitoring platform.
  3. Restrict the Delete Event Monitoring Data permission to a minimal set of trusted administrators.
  4. Configure monitoring alerts to detect any assignment of this permission or modification of Event Monitoring settings.
  5. Periodically verify that log exports are functioning correctly and that required log types are being successfully captured.
Validation
  1. Obtain the organization's security policy defining required log retention periods.
  2. Review the Salesforce organization's licensing and Event Monitoring configuration to determine the native retention period of logs.
  3. Verify whether Event Monitoring logs are exported to an external system such as a SIEM or centralized logging platform.
  4. Confirm that the combined retention period (Salesforce native retention plus external storage retention) meets or exceeds the required organizational or regulatory retention policy.
  5. Review the SetupAuditTrail log to confirm that changes to Event Monitoring configuration and permissions are monitored, including assignment of the Delete Event Monitoring Data permission.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-MON-003 · High
Monitor for Suspicious Logins
Failed
What's wrong

Respondent attests no continuous suspicious-login monitoring exists. Compromised credentials provide an undetected foothold; attacker dwell time grows until the breach is discovered another way.

Why it matters

Failure to continuously monitor and rapidly respond to suspicious login activity creates an immediate and severe risk of a full system compromise. This control's absence exposes the organization to three critical, cascading risks:

  1. The primary risk is that a successful suspicious login is the initial foothold for a sophisticated attack. Without automated, anomaly-based alerting, the attacker's presence goes unnoticed, leading to a prolonged "dwell time" (the period between compromise and detection). A longer dwell time allows the attacker to extensively reconnoiter the environment, escalate privileges, and establish persistence. The longer the intruders have to prepare, the harder it is to evict them from the Org.
  2. Compromised credentials grant attackers direct access to sensitive customer data, intellectual property, and internal records. The lack of timely detection enables the attacker to perform mass data exfiltration via API, reporting, or export functionality before the account can be disabled, leading to financial, regulatory, and reputational damage.
  3. An attacker with elevated access can modify or delete critical configuration settings, business-critical data, or code, leading to system downtime, operational failures, and a loss of data integrity.
Step-by-step fix
  1. Adopt a dedicated Security Information and Event Management (SIEM) solution or a specialized login anomaly detection platform. If a platform is already in place, ensure all Salesforce authentication and event logs, particularly LoginHistory and relevant Event Monitoring logs, are being successfully exported, ingested, and correctly parsed.
  2. Configure the monitoring solution to explicitly analyze login events from all sources, including:
  • Human Users: Both Single Sign-On (SSO) and direct Salesforce credentials.
  • Non-Human/Integration Accounts: Any connected systems, APIs, or automated processes.
  3. Periodically review and test the effectiveness of the deployed detection rules, minimizing false positives and updating baselines as organizational travel patterns or integration accounts change.
  4. Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated suspicious login alerts.
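Where a full SIEM integration is still in progress, a minimal sketch of a new-source-IP check against LoginHistory, assuming the simple_salesforce Python library and session credentials in environment variables; this is a stopgap, not a substitute for the continuous analytics described in step 1.

# Hypothetical sketch: flag last-day logins from source IPs not previously seen
# for that user within a 90-day baseline.
import os
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

rows = sf.query_all(
    "SELECT UserId, SourceIp, LoginTime, Status, Application "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:90 ORDER BY LoginTime"
)["records"]

cutoff = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")
baseline = defaultdict(set)
for row in rows:
    if row["LoginTime"][:10] >= cutoff and row["SourceIp"] not in baseline[row["UserId"]]:
        print(f"New source IP for user {row['UserId']}: {row['SourceIp']} "
              f"({row['Application']}, {row['Status']})")
    baseline[row["UserId"]].add(row["SourceIp"])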
Validation
  1. Verify the organization utilizes a continuous analytics solution that looks for anomalies in Salesforce login data
  2. Confirm that the analytics solution's monitoring scope includes all user logins and all non-human integration or application user logins, regardless of whether the authentication method is Single Sign-On (SSO) or direct Salesforce login.
  3. Confirm that there is a documented, mandatory procedure to periodically (e.g. quarterly) review, test, and update the login anomaly detection rules and baselines to ensure they remain effective and minimize false positives. Review should also include an analysis of investigated suspicious login alerts during the review period – no investigations can mean the rules are not firing.
  4. Review the process and history of investigating and responding to suspicious login alerts.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-MON-004 · High
Monitor for Suspicious API Activity
Failed
What's wrong

Respondent attests no continuous API anomaly monitoring exists. Post-authentication attacks via stolen tokens or compromised integrations operate without detection.

Why it matters

Failure to monitor API activity creates a critical blind spot for post-authentication attacks, leading to severe and cascading consequences:

  1. The primary risk is that a compromised application, integration account, or stolen session token can be used to perform mass, high-speed data exfiltration.
  2. Attackers can leverage API access to perform unauthorized and undetectable manipulation of the Salesforce Org.
Step-by-step fix
  1. Deploy or build a dedicated analytics solution for granular API log analysis. Ensure all relevant Salesforce Event Monitoring logs for API activity are being successfully exported, ingested, and correctly parsed into the monitoring platform.
  2. Configure the analytics solution to first establish a baseline of normal API behavior for every user and integration account. Following the baseline period, configure and tune high-severity anomaly detection rules to detect deviations, specifically rules that:
  • Alert on an anomalous spike in data retrieval (read/query) volume.
  • Monitor for CRUD (Create, Read, Update, Delete) operations on objects or security settings outside of an entity's established historical interaction baseline.
  • Flag a change from read-heavy activity to an unusual volume of write or delete operations.
  • Alert on API calls originating from a suspicious or atypical geolocation/IP address for the accessing entity.
  3. Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated API anomaly alerts. The procedure must include immediate steps to lock or deactivate the compromised user or application account and revoke stolen access tokens to prevent further data loss or system manipulation.
  4. Establish a mandatory, recurring (e.g. quarterly) review process to:
  • Periodically test the effectiveness of the deployed detection rules using simulated incidents (e.g. table-top exercises).
  • Review and update the established baselines as new integrations are deployed or organizational usage patterns change.
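A minimal sketch of the read-volume baseline idea from step 2, assuming the ApiTotalUsage CSVs retained under SBS-INT-004 are available locally; the USER_ID column name is an assumption to confirm against a sample export.

# Hypothetical sketch: flag users whose daily API call volume spikes far above
# their own baseline, using retained ApiTotalUsage CSVs named YYYY-MM-DD.csv.
import csv
import glob
import os
import statistics
from collections import Counter, defaultdict

daily_counts = defaultdict(dict)                  # user -> {date: call count}
for path in sorted(glob.glob("apitotalusage_logs/*.csv")):
    day = os.path.basename(path)[:10]
    with open(path, newline="") as fh:
        per_user = Counter(row["USER_ID"] for row in csv.DictReader(fh))
    for user, count in per_user.items():
        daily_counts[user][day] = count

for user, by_day in daily_counts.items():
    days = sorted(by_day)
    if len(days) < 8:
        continue                                  # require a baseline period before alerting
    baseline = [by_day[d] for d in days[:-1]]
    latest = by_day[days[-1]]
    if latest > statistics.mean(baseline) + 3 * statistics.pstdev(baseline):
        print(f"Spike: {user} made {latest} API calls on {days[-1]} "
              f"vs baseline mean {statistics.mean(baseline):.0f}")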
Validation
  1. Verify the organization utilizes a continuous analytics solution (e.g. SIEM, log aggregation platform) that is verifiably integrated with and ingesting all required API activity data, specifically the detailed Event Monitoring logs from Salesforce.
  2. Confirm that the monitoring scope includes all API access, including user-initiated, non-human/integration accounts, and all API methods (e.g. REST, SOAP, Bulk). Verify the logs ingested are granular enough to track specific API calls (CRUD operations) against specific objects.
  3. Request evidence (e.g. SIEM rule configurations, simulated alert outputs) to verify that the following specific, high-severity anomaly detection rules are active and tuned:
  • Mass Data Exfiltration: Alerts on an anomalous spike in data retrieval volume for a specific user or application, especially for sensitive objects.
  • Unauthorized Object Access: Alerts on a user or integration account attempting CRUD operations on objects they have no historical precedent for interacting with (e.g. Sales user accessing Security Settings).
  • Suspicious Action Shift: Alerts on a shift from a baseline of read-only operations to a high volume of write or delete operations by a user or application.
  • Suspicious Network Origins: Alerts on API calls originating from a suspicious or high-risk geolocation/IP address that is unusual for the accessing entity.
  4. Process Review - Triage & Response: Examine the documented procedures for triage, investigation, and response to API anomaly alerts. Review a sample of anomaly findings and the actions taken on them for, e.g., the past 6 months, to confirm that alerts are being acted upon within the defined response timeframes.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-OAUTH-003 · High
Add Criticality Classification of OAuth-Enabled Connected Apps
Failed
What's wrong

Respondent attests they do NOT maintain a criticality classification for OAuth-enabled Connected Apps. Without it, vendor-risk reviews cannot be prioritized.

Why it matters

Without a complete inventory and criticality classification, organizations lose visibility into their third-party integration landscape—preventing effective risk assessment, prioritization of security controls, and governance of external system connectivity. Security teams cannot identify which integrations access sensitive data, scope the impact of a vendor compromise, or respond effectively to incidents involving Connected Apps. This impairs detection, investigation, and response capabilities for integration-related security events.

Step-by-step fix
  1. Add any missing OAuth-enabled Connected Apps to the system of record.
  2. Document and assign a vendor criticality rating to each Connected App based on operational importance and data sensitivity.
  3. Implement a recurring process to synchronize Connected App changes with the system of record.
Validation
  1. Retrieve a list of all Connected Apps with active OAuth configurations from Salesforce Setup.
  2. Retrieve the organization's authoritative system of record for integration and vendor management.
  3. Compare the Salesforce Connected App list to the system of record and confirm every OAuth-enabled Connected App appears in the inventory.
  4. Verify each listed Connected App has an assigned vendor criticality rating documented in the system of record.
  5. Flag any apps missing from the inventory or lacking a documented criticality rating as noncompliant.
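A minimal sketch of validation steps 1–3, assuming the simple_salesforce Python library, that the ConnectedApplication object is queryable in your org, and a placeholder inventory.csv exported from the system of record with a "name" column.

# Hypothetical sketch: reconcile org Connected Apps against the system-of-record inventory.
import csv
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

org_apps = {r["Name"] for r in sf.query_all(
    "SELECT Name FROM ConnectedApplication")["records"]}

with open("inventory.csv", newline="") as fh:          # placeholder export from the system of record
    inventory = {row["name"] for row in csv.DictReader(fh)}

for missing in sorted(org_apps - inventory):
    print(f"Not in system of record: {missing}")
for stale in sorted(inventory - org_apps):
    print(f"In inventory but not found in org: {stale}")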
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-SECCONF-001 · High
Establish a Salesforce Health Check Baseline
Failed
What's wrong

Respondent attests they do NOT have a written Health Check baseline. Without one, configuration drift is invisible.

Why it matters

Without a defined Health Check baseline, organizations have no authoritative reference for what their security configuration should be—making it impossible to detect drift, evaluate deviations, or determine whether current settings reflect intentional decisions or accumulated neglect. Security teams cannot assess configuration-related risk, investigate whether settings were deliberately changed, or demonstrate compliance with security requirements. The absence of a baseline also prevents effective use of Health Check deviation monitoring (SBS-SECCONF-002), as there is no standard to measure against.

Step-by-step fix
  1. Create or select a Health Check baseline (Salesforce default, SBS-recommended baseline, or a custom-defined XML).
  2. Upload the baseline XML into Setup → Health Check.
  3. Document ownership of the baseline and establish a process for periodic review and updates.
  4. Communicate the baseline's purpose and implications to system owners and security stakeholders.
Validation
  1. Navigate to Setup → Health Check and confirm that a baseline template is uploaded and active.
  2. Review the XML baseline directly (via UI or API) to verify that the baseline exists and contains intentional values rather than defaults left unexamined.
  3. Interview administrators to confirm the baseline was deliberately chosen or customized and is understood as the organization's configuration standard.
  4. If the organization lacks a baseline, flag the control as noncompliant.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-SECCONF-002 · High
Review and Remediate Salesforce Health Check Deviations
Failed
What's wrong

Respondent attests Health Check results are NOT regularly reviewed and acted on.

Why it matters

Without periodic review and remediation of Health Check deviations, configuration drift accumulates undetected—weakening security posture over time as settings diverge from the intended baseline. Security teams cannot identify when critical platform settings (authentication, session management, content security) have been changed or misconfigured, preventing timely response to emerging vulnerabilities. Unaddressed deviations may persist indefinitely, creating exploitable gaps that remain invisible until a breach or audit reveals the exposure.

Step-by-step fix
  1. Establish a recurring review process using any reliable method, including:
    • Salesforce Health Check UI,
    • API exports,
    • CLI automation,
    • Scheduled scripts,
    • Vendor tooling.
  2. Assign ownership for conducting the review and maintaining documentation.
  3. Review current deviations and either remediate them or document exceptions.
  4. Implement tracking to ensure deviations are remediated or re-reviewed in future cycles.
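A minimal sketch of the API-export option in step 1, assuming the simple_salesforce Python library and that the Tooling API SecurityHealthCheckRisks object and its field names match your API version; verify both against the Tooling API reference before relying on this.

# Hypothetical sketch: pull current Health Check settings and risk ratings via the Tooling API.
import os
import requests
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

soql = ("SELECT RiskType, Setting, SettingGroup, OrgValue, StandardValue "
        "FROM SecurityHealthCheckRisks")                 # object/field names assumed; confirm first
resp = requests.get(
    f"https://{sf.sf_instance}/services/data/v59.0/tooling/query/",
    params={"q": soql},
    headers={"Authorization": f"Bearer {sf.session_id}"},
    timeout=60,
)
resp.raise_for_status()

for risk in resp.json()["records"]:
    # Filter on RiskType to focus the review on entries that deviate from the baseline.
    print(f"{risk['RiskType']}: {risk['SettingGroup']} / {risk['Setting']} "
          f"= {risk['OrgValue']} (baseline: {risk['StandardValue']})")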
Validation
  1. Interview system owners to identify the established Health Check review process and review interval (e.g., monthly, quarterly).
  2. Examine evidence of recent Health Check reviews, such as documented review artifacts, exported reports, tickets, changes, or exception records.
  3. Verify that deviations were:
    • Remediated within the review window, or
    • Documented as exceptions with clear justification and approval.
  4. Confirm that the review process is repeatable, assigned to an owner, and actually followed.
  5. Flag noncompliance if:
    • No review process exists,
    • No review evidence can be produced, or
    • Deviations occur without remediation or exception documentation.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-ACS-010 · Moderate
Enforce Periodic Access Review and Recertification
Failed
What's wrong

Respondent attests user access is NOT formally reviewed and re-approved at least annually with documented results. Recertification is a SOC 2 CC6.2 expectation.

Why it matters

Access review is the foundational control for preventing privilege creep, detecting unauthorized access, and remediating excessive permissions. Without periodic review, users accumulate access over time: permissions granted for past roles remain after job changes, access added for temporary projects becomes permanent, and no formal mechanism ensures access remains least-privilege. Periodic formal recertification by business stakeholders ensures that access governance remains aligned with organizational reality. Documentation of review creates an audit trail and ensures accountability. Regular remediation prevents drift and maintains the integrity of the permission set model defined in SBS-ACS-001.

Step-by-step fix
  1. If no access review process exists, establish documented policy including frequency, reviewers, scope, and remediation SLAs.
  2. Conduct an initial comprehensive access review of all active users, with business unit or department ownership of sign-off.
  3. Identify and remediate all access determined excessive, unauthorized, or inappropriate during the initial review.
  4. Implement a system of record (spreadsheet, governance tool, or integrated platform) to track reviews, findings, and remediation.
  5. Schedule recurring access reviews at minimum annual frequency, with quarterly reviews for sensitive roles or high-risk data.
  6. Document the review process, including templates, stakeholder roles, and escalation procedures.
  7. Establish accountability for reviewers and tie review completion to performance management or audit requirements.
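A minimal sketch of an assignment export that could seed the review in steps 2 and 4, assuming the simple_salesforce Python library and session credentials in environment variables; it covers profiles and permission sets only, so role hierarchies, groups, queues, and territories still need their own extracts.

# Hypothetical sketch: export every active user's profile and permission set assignments
# to a CSV that business reviewers can sign off on.
import csv
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

assignments = sf.query_all(
    "SELECT Assignee.Username, PermissionSet.Name, PermissionSet.IsOwnedByProfile, "
    "PermissionSet.Profile.Name FROM PermissionSetAssignment "
    "WHERE Assignee.IsActive = true"
)["records"]

with open("access_review.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["user", "assignment", "type"])
    for a in assignments:
        ps = a["PermissionSet"]
        if ps["IsOwnedByProfile"]:
            writer.writerow([a["Assignee"]["Username"], ps["Profile"]["Name"], "profile"])
        else:
            writer.writerow([a["Assignee"]["Username"], ps["Name"], "permission set"])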
Validation
  1. Understand the organization's documented access review policy, including:
    • Defined frequency and review cycle
    • Designated reviewers and escalation path
    • Intended coverage scope and access types included
    • Expected remediation timeframe for findings
    • System of record for tracking review activity and findings.
  2. Assess the recency and regularity of access review execution. Locate the most recent completed access review cycle and evaluate whether it aligns with the organization's stated review frequency.
  3. Examine a representative sample of access review documentation to assess consistency of execution:
    • Evidence of review and approval by the designated stakeholder
    • Documentation of review date and scope
    • Any findings, exceptions, or questions raised during the review
    • Appropriateness of sample size relative to the organization's user population and complexity
  4. For any access identified as excessive, unauthorized, or not recertified:
    • Assess whether the finding was documented
    • Evaluate what remediation action was taken or whether exceptions were formally approved
    • Compare remediation timing against the defined SLA
  5. Assess whether the organization maintains a traceable system of record that documents:
    • Who reviewed what access
    • When the review occurred
    • What was approved or questioned
    • What remediation was required and its completion status
  6. Evaluate whether the access review process adequately addresses the organization's primary access constructs, which may include the following types of assignment:
    • User profiles
    • Permission sets
    • Permission set groups
    • Role and role hierarchies
    • Public group
    • Queues
    • Sales Territories
    • Delegated administration or elevated permissions
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-AUTH-002 · Moderate
Govern and Document All Users Permitted to Bypass Single Sign-On
Failed
What's wrong

Respondent attests there are users permitted to bypass single sign-on without documented business reasons. SSO-bypass accounts are the canonical break-glass attack target.

Why it matters

Users permitted to bypass SSO represent exceptions to centralized identity governance. Without formal documentation and approval, these accounts can proliferate unnoticed—reducing visibility into access patterns and undermining audit readiness. However, this control provides assurance and governance rather than establishing a security boundary. Undocumented exceptions increase operational risk and reduce audit readiness but require credential compromise for direct security impact.

Step-by-step fix
  1. Create or update a formal inventory documenting all SSO-bypass users with their business justification, owner, and approval date.
  2. For any undocumented or unjustified users: assign the "Is Single Sign-On Enabled" permission via their profile or permission sets to remove SSO-bypass capability.
  3. Ensure all documented exceptions adhere to least-privilege access and strong authentication controls.
  4. Establish periodic (e.g., quarterly) review of all SSO-bypass accounts.
Validation
  1. Query all user records to identify users who do not have the "Is Single Sign-On Enabled" (PermissionsIsSsoEnabled) permission assigned through their profile or permission sets.
  2. Verify each identified user appears in the approved system-of-record inventory with documented business justification, owner, and approval.
  3. Confirm each exception is authorized for administrative or break-glass purposes only.
  4. Validate that these accounts follow strong local authentication controls (e.g., strong password policies, MFA if applicable).
  5. Flag any user without documented approval.
  6. (Optional) Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) to monitor SSO bypass account activity:
    • Filter API activity by users identified as SSO bypass accounts.
    • Review frequency and timing of API calls to verify usage aligns with documented break-glass purposes.
    • Flag any SSO bypass accounts with regular or unexpected API activity for review against documented justifications.
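A minimal sketch of validation step 1, assuming the simple_salesforce Python library and session credentials in environment variables; because profiles are represented as profile-owned permission sets, one PermissionSetAssignment query covers both assignment paths.

# Hypothetical sketch: list active users who lack the "Is Single Sign-On Enabled"
# permission through any profile or permission set.
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

sso_enforced = {r["AssigneeId"] for r in sf.query_all(
    "SELECT AssigneeId FROM PermissionSetAssignment "
    "WHERE PermissionSet.PermissionsIsSsoEnabled = true")["records"]}

users = sf.query_all(
    "SELECT Id, Username, Profile.Name FROM User WHERE IsActive = true")["records"]

for user in users:
    if user["Id"] not in sso_enforced:
        # Every user listed here must appear in the documented bypass inventory (step 2).
        print(f"SSO bypass possible: {user['Username']} ({user['Profile']['Name']})")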
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-AUTH-003 · Moderate
Prohibit Broad or Unrestricted Profile Login IP Ranges
Failed
What's wrong

Respondent attests at least one profile has an unrestricted login IP range (e.g., `0.0.0.0/0`) that defeats the purpose of IP allow-listing.

Why it matters

Overly broad login IP ranges effectively disable network-based access controls, allowing authentication from any location on the internet. However, exploitation requires credentials to be compromised first—this control provides defense-in-depth rather than establishing a primary security boundary. When authentication controls (SBS-AUTH-001) are enforced, IP restrictions serve as an additional layer that limits the blast radius of credential compromise.

Step-by-step fix
  1. Remove any profile login IP ranges that effectively grant unrestricted global access.
  2. Replace them with IP ranges that correspond to approved corporate networks, office locations, VPN ingress points, or other authorized infrastructure.
  3. Validate that updated network restrictions do not block legitimate access paths and that users can authenticate through sanctioned networks.
  4. Establish an internal governance process to review and approve all future additions of profile login IP ranges.
Validation
  1. Retrieve all profile login IP ranges via Setup → Profiles → Login IP Ranges or by querying the Profile metadata (loginIpRanges field) using the Metadata API.
  2. For each profile, enumerate all configured login IP ranges.
  3. Identify any ranges that:
    • Cover the entire IPv4 space, or
    • Represent effectively unrestricted access (e.g., 0.0.0.0–255.255.255.255, 1.1.1.1–255.255.255.255, or similar patterns).
  4. Confirm that all IP ranges align with organizational security policy and defined network boundaries.
  5. Flag any profile with an impermissible or overly broad range.
  6. Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) to validate IP restrictions are effective:
    • Extract unique CLIENT_IP values from recent API activity.
    • Compare against documented approved organizational network ranges.
    • Identify any new or unexpected IP addresses making API calls.
    • Cross-reference unusual IPs with profile assignments to identify potential policy gaps.
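A minimal sketch of validation step 6, assuming the ApiTotalUsage CSVs retained under SBS-INT-004 are available locally; the CIDR blocks are placeholders for your documented corporate and VPN ranges.

# Hypothetical sketch: list client IPs seen in retained ApiTotalUsage CSVs that fall
# outside the approved network ranges.
import csv
import glob
import ipaddress

approved = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]  # placeholders

seen = set()
for path in glob.glob("apitotalusage_logs/*.csv"):
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            seen.add(row["CLIENT_IP"])

for ip in sorted(seen):
    try:
        addr = ipaddress.ip_address(ip)
    except ValueError:
        continue                      # some rows carry non-IP values such as "Salesforce.com IP"
    if not any(addr in net for net in approved):
        print(f"Unexpected client IP: {ip}")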
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CODE-001 · Moderate
Mandatory Peer Review for Salesforce Code Changes
Failed
What's wrong

Respondent attests code changes do NOT all go through mandatory peer review before reaching production.

Why it matters

Without mandatory peer review, a single developer—whether compromised, malicious, or simply mistaken—can introduce insecure or flawed code directly into the deployment pipeline. This eliminates shared oversight of changes to sensitive business logic, allowing vulnerabilities, backdoors, or destructive changes to reach production without independent human verification before deployment.

Step-by-step fix
  1. Update branch protection rules to require peer review before merge.
  2. Train developers on the peer review workflow, including security checks such as identifying sensitive data in logging statements.
  3. Block direct commits to production-bound branches.
Validation
  1. Inspect source control settings to confirm merge rules require peer review on production-bound branches.
  2. Review merge history or representative pull requests to verify peer approvals were recorded.
  3. Confirm that peer review processes include security checks such as verifying logging statements do not expose sensitive data.
  4. Flag any repositories or branches that allow merging without peer approval.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-CODE-002 · Moderate
Pre-Merge Static Code Analysis for Apex and LWC
Failed
What's wrong

Respondent attests pre-merge static security analysis is NOT in place. SOQL injection and other code-level issues commonly slip through without it.

Why it matters

Without enforced static code analysis, known vulnerability patterns in Apex and LWC—such as SOQL injection, insecure data exposure, and improper access control—may enter production undetected. This increases the likelihood of exploitable flaws persisting in deployed code, creating potential vectors for data breaches or unauthorized access that human reviewers may not catch.

Step-by-step fix
  1. Integrate static code analysis into the CI/CD pipeline for all production-bound branches.
  2. Enable Apex and LWC security rulesets within the scanning tool.
  3. Configure pipelines to block merges when static analysis fails.
Validation
  1. Inspect CI/CD pipeline configuration to confirm a static code analysis step runs before merges.
  2. Verify the SAST tool includes security rulesets for Apex and Lightning Web Components.
  3. Review pipeline logs from representative merges to ensure scans executed and passed.
  4. Flag pipelines or branches missing enforced pre-merge scanning.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-FILE-001 · Moderate
Require Expiry Dates on Public Content Links
Failed
What's wrong

Respondent attests Public Content links lack appropriate expiry dates. Permanent links extend exposure indefinitely if leaked, intercepted, or accidentally re-shared.

Why it matters

Without an expiry date, Public Content links remain permanently accessible, extending the window of potential exposure indefinitely. While the link itself must be obtained by an unauthorized party for access to occur, perpetually valid links increase the cumulative risk of data exposure through link leakage, sharing, or discovery. Time-bounded links reduce the blast radius of any single link compromise and support data lifecycle governance.

Step-by-step fix
  1. For each flagged content distribution record, determine the sensitivity classification of the associated content.
  2. Set an appropriate expiry date on the ContentDistribution object based on content classification.
  3. Establish organizational policy defining maximum link lifetimes by data classification.
Validation
  1. Enumerate all ContentDistribution object records via the SOAP/REST API or Apex.
  2. Identify all records where PreferencesExpires = false.
  3. Flag any Public Content links without expiry dates for review.
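A minimal sketch of validation steps 1–2, assuming the simple_salesforce Python library and session credentials in environment variables.

# Hypothetical sketch: list Public Content links with no expiry date set.
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

links = sf.query_all(
    "SELECT Id, Name, ContentDocumentId "
    "FROM ContentDistribution WHERE PreferencesExpires = false"
)["records"]

for link in links:
    print(f"No expiry: {link['Name']} (document {link['ContentDocumentId']})")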
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-FILE-003 · Moderate
Periodic Review and Cleanup of Public Content Links
Failed
What's wrong

Respondent attests no recurring review of active Public Content links exists. Legacy and accidentally-shared links accumulate as undetected, persistent exposure.

Why it matters

Without periodic review, Public Content links accumulate over time—including legacy links created before security policies were established, links that have outlived their business purpose, and links created through accidental or unauthorized sharing. These forgotten links represent persistent exposure that may go undetected indefinitely. While this control does not prevent initial link creation issues, it provides a governance mechanism to identify and remediate accumulated risk, supporting defense-in-depth and reducing the organization's overall exposure footprint.

Step-by-step fix
  1. Establish a documented process for periodic review of all ContentDistribution records.
  2. Define a review cadence appropriate to organizational risk tolerance (quarterly recommended as a baseline).
  3. Create a scanning mechanism (script, report, or tool) to enumerate all active Public Content links.
  4. Define review criteria to identify links requiring remediation: missing expiry dates, missing passwords on sensitive content, links older than a defined threshold, or links to content no longer requiring external sharing.
  5. Assign ownership for the review process and remediation actions.
  6. Maintain records of each review cycle for audit purposes.
Validation
  1. Verify the organization has a documented process for periodic Public Content link review.
  2. Confirm the review cadence is defined (e.g., quarterly, monthly) and appropriate for the organization's risk profile.
  3. Obtain evidence of recent review execution (e.g., scan results, remediation records, review meeting notes).
  4. Verify that reviews include all active ContentDistribution records.
  5. Confirm that identified issues are tracked through remediation or deletion.
  6. Flag organizations without a documented review process or evidence of recent execution.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-INT-001 · Moderate
Enforce Governance of Browser Extensions Accessing Salesforce
Failed
What's wrong

Respondent attests browser extensions interacting with Salesforce are NOT centrally governed. User-installed extensions can read every page and exfiltrate data silently.

Why it matters

Without centralized governance over browser extensions, malicious or cloned extensions—increasingly common with AI-generated code—can harvest session tokens, exfiltrate data, and execute unauthorized operations within authenticated Salesforce sessions. However, this control provides governance and defense-in-depth rather than establishing a Salesforce-native security boundary: exploitation requires a malicious extension to be installed and an authenticated session to exist. Other controls (SSO, session management) still provide protection, and this governance mechanism operates outside Salesforce via MDM or browser management.

Step-by-step fix
  1. Implement a centrally managed browser or device management solution capable of enforcing extension restrictions (e.g., Chrome Browser Cloud Management, Intune, Jamf, or GPO-based controls).
  2. Define and apply an allow-list or blocklist policy governing which extensions are permitted to interact with Salesforce.
  3. Remove or disable any unapproved browser extensions from managed devices.
  4. Apply enforcement policies to all corporate-managed devices accessing Salesforce.
Validation
  1. Request evidence of a browser-extension governance mechanism applied to user devices (e.g., Chrome Browser Cloud Management, Intune configuration profile, Jamf configuration profile, Active Directory GPO, or equivalent).
  2. Require a screenshot, exported policy file, or screen capture demonstrating that extension controls are active and enforceable (e.g., an allow-list or blocklist configuration for Chrome extensions).
  3. Verify that the mechanism explicitly restricts installation or execution of unapproved extensions that can access Salesforce domains.
  4. Flag the organization as noncompliant if no enforceable governance mechanism exists or if extension governance is based solely on policy, awareness, or voluntary user behavior.
  5. Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) and analyze for indicators of unauthorized browser extension activity:
    • Review USER_AGENT field for patterns indicating browser extensions (e.g., extension identifiers, non-standard user agents).
    • Identify API call patterns characteristic of auto-refresh extensions (e.g., Inspector Reloader) such as regular-interval repeated requests.
    • Flag any anomalous patterns for investigation against approved extension inventory.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-INT-003 · Moderate
Inventory and Justification of Named Credentials
Failed
What's wrong

Respondent attests they do NOT maintain an inventory + justification list for Named Credentials. Forgotten Named Credentials with valid stored secrets are a hidden third-party access surface.

Why it matters

Without a documented inventory and justification for Named Credentials, undocumented or unjustified configurations may expose the organization to data leakage, unauthorized integrations, or reliance on insecure or untrusted endpoints. However, this control provides governance documentation rather than detection or prevention capability: it supports audit readiness and informed decision-making about authenticated external connections, but other controls are required to detect or prevent actual misuse of approved credentials.

Step-by-step fix
  1. Add any undocumented Named Credentials to the system of record.
  2. Document a valid business justification for each Named Credential.
  3. Remove, disable, or reconfigure any Named Credentials that cannot be justified or that reference untrusted endpoints.
  4. Establish a recurring reconciliation process to ensure Named Credentials remain fully inventoried and justified.
Validation
  1. Enumerate all Named Credentials using Salesforce Setup, Metadata API, Tooling API, or Connect REST API.
  2. Retrieve the organization’s system of record for approved external endpoints and integration credentials.
  3. Compare the Salesforce list to the system of record to confirm all Named Credentials are documented.
  4. Verify that each documented Named Credential includes:
    • The external endpoint URL
    • The authentication type (named principal or per-user)
    • The business justification for the integration
  5. Flag any Named Credentials missing from the inventory or lacking justification as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-MON-005 · Moderate
Monitor API Usage Against Limits
Failed
What's wrong

Respondent attests no continuous API limit monitoring with proactive alerting exists. A runaway integration or compromised account can exhaust the daily quota and break core business processes before being detected.

Why it matters

Failure to stay within API limits creates an immediate and severe risk to the availability of critical business operations. Exceeding the rolling 24-hour API quota blocks all further inbound requests, which effectively disables core integrations (e.g. with ERP), leading to catastrophic failures of vital business processes like order placement and resulting in direct financial loss.

Step-by-step fix
  1. Configure Salesforce's native "API Usage Notifications" in Setup and, more critically, integrate API consumption data into an external monitoring solution. Configure high-priority alerts to trigger at a proactive utilization threshold (e.g. 80-90%) before the hard limit is reached.
  2. Establish a formal, high-priority Standard Operating Procedure (SOP) to be executed upon a high-utilization alert. This SOP must clearly define and be rehearsed to:
  • Immediately identify the user, application, or integration responsible for the spike in API consumption.
  • Mandate the immediate, temporary disabling of non-critical, high-volume integrations to conserve remaining quota.
  • Outline the process for proactively requesting a temporary or permanent API quota increase from Salesforce.
  3. Following any API limit breach or near-breach, mandate a post-incident review to analyze the root cause. The outcome should be a plan to re-engineer high-volume integrations for more efficient usage (e.g. using Bulk API or other low-volume methods) to prevent recurrence.
  4. Continuous Monitoring Maintenance: Establish a mandatory, recurring (e.g. quarterly) review process to confirm the ongoing integrity and functionality of the API usage data export to the monitoring platform.
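A minimal sketch of a proactive utilization check for step 1, assuming the simple_salesforce Python library and session credentials in environment variables; in practice the warning would feed an alerting channel rather than stdout.

# Hypothetical sketch: compare daily API consumption against the org limit and warn above 80%.
import os
from simple_salesforce import Salesforce

sf = Salesforce(instance_url=os.environ["SF_INSTANCE_URL"],
                session_id=os.environ["SF_SESSION_ID"])

api = sf.limits()["DailyApiRequests"]
used = api["Max"] - api["Remaining"]
utilization = used / api["Max"]

print(f"Daily API requests: {used}/{api['Max']} ({utilization:.0%})")
if utilization >= 0.80:
    print("WARNING: API utilization above 80% of the rolling 24-hour quota")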
Validation
  1. Review the organization's daily API limit and current 24-hour usage in the Salesforce System Overview in Setup.
  2. Verify that a continuous monitoring solution (at minimum Salesforce's native API Usage Notifications) is implemented and active.
  3. Confirm that the monitoring solution is configured to trigger immediate, high-priority alerts at a proactive threshold (e.g. 80% or 90% utilization) before the hard limit is breached. Request evidence (e.g. rule configurations, test alerts) to support this.
  4. Examine the formal, documented incident response plan for API limit breaches. Verify the plan clearly defines:
  • The immediate triage steps to identify the runaway process or application causing the spike.
  • The mandatory, temporary mitigation steps (e.g. disabling non-critical integrations).
  • The process for escalating and requesting a temporary or permanent API quota increase from Salesforce.
  5. Review the history (e.g. for the past 12 months) of any API limit breach or near-breach incidents. Confirm that the response procedure was followed and was effective in mitigating the service disruption.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-OAUTH-004 · Moderate
Due Diligence Documentation for High-Risk Connected App Vendors
Failed
What's wrong

Respondent attests they do NOT maintain security documentation for high-risk Connected App vendors. Vendor due diligence gaps are SOC 2 and ISO supplier-management findings waiting to happen.

Why it matters

Without documented due diligence for high-risk vendors, organizations may onboard integrations without understanding the vendor's security posture, data handling practices, or contractual obligations. This increases the likelihood of undiscovered risks but does not directly enable unauthorized access—other controls (SBS-OAUTH-001, SBS-OAUTH-002) still govern the technical security boundary. Missing documentation primarily impacts audit readiness, risk assessment accuracy, and the organization's ability to make informed decisions about vendor relationships.

Step-by-step fix
  1. Collect and store all required documentation for each high-risk vendor.
  2. Where documentation does not exist, record this absence in the vendor assessment.
  3. Update the vendor management process to ensure ongoing due diligence for high-risk vendors.
Validation
  1. Retrieve the list of Connected App vendors classified as high-risk from the organization’s system of record.
  2. For each high-risk vendor, verify that the following documents, where available, are stored in the designated repository:
    • Terms of use
    • Privacy policy
    • Trust center or security overview
    • Published information security guidelines
  3. Confirm that any missing documentation is explicitly recorded as unavailable in the vendor assessment.
  4. Flag any high-risk vendor lacking required documentation or missing explicit acknowledgment of unavailable documents as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
HelloMavens

Your roadmap is clear. Implementation is where most security efforts stall.

Our cofounders led product and engineering teams at GrubHub and Rocket, and led all of product and engineering at National Debt Relief — and have built and audited Salesforce environments at every scale, from Series A startups to Fortune 500-scale enterprises.

  • Walk through your top remediation priorities together.
  • Get a fixed-scope plan to close the highest-impact gaps first.
  • Optional: bring us in to ship the fixes and keep them from regressing.
Book a 30-minute remediation review

No obligation. We'll review your top three findings with you and tell you whether HelloMavens is the right fit.

Appendix

Methodology, disclaimer, and glossary

Methodology

Each control was evaluated against the Security Benchmark for Salesforce vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Controls scored as pass / fail / inconclusive / not applicable. Category scores are weighted by risk tier (Critical 5, High 3, Moderate 2). Overall score is a weighted average across categories proportional to each category's share of Critical-tier and High-tier controls. Inconclusive and not-applicable controls are excluded from denominators. If any Critical-tier control failed, the overall grade is capped at C regardless of other scores.

Engine version 0.0.0-alpha.10. SBS vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Disclaimer version 2026-05-02-placeholder-1.

Disclaimer

The HelloMavens Salesforce Security Audit produces a directional security assessment based on questionnaire responses you provide.

This report is not a substitute for a formal security audit, penetration test, or compliance certification.

HelloMavens LLC makes no warranty, express or implied, regarding the completeness, accuracy, or fitness for any particular purpose of this report.

You confirm that you are authorized to submit this information about your organization.

HelloMavens LLC will process your responses solely to generate this report and will not retain raw scan data after report generation. Aggregate, anonymized scoring data may be retained for benchmarking.

Any remediation actions you take based on this report are at your own risk and discretion.

Glossary
SBS
Security Benchmark for Salesforce — an open standard of audit-ready controls maintained at github.com/Salesforce-Security-Benchmark.
OWASP Top 10
A standard awareness document for developers and web application security maintained by the Open Web Application Security Project. The 2021 edition is referenced throughout this report.
HIPAA Security Rule
U.S. federal regulation governing security standards for protected health information.
SOC 2
Service Organization Control 2 — an audit framework focused on the AICPA Trust Services Criteria (security, availability, processing integrity, confidentiality, privacy).
Inconclusive
A control that could not be evaluated from the evidence provided (e.g. you answered "I don't know"). Inconclusive controls are excluded from scoring denominators per spec §8.
Critical-fail cap
If any Critical-tier control fails, the overall risk grade is capped at C regardless of other scores. Highlights catastrophic risk areas.
Consultant scan
An evidence-based audit run by a security consultant against your Salesforce org via the upcoming sbs-scan CLI. Resolves inconclusive controls with high-confidence findings.
Where you're already strong

What these passing controls protect against, in plain language.

Access controls

  • SBS-ACS-009 Implement Compensating Controls for Privileged Non-Human Identities

    Privileged non-human identities have compensating controls layered on top of credentials, so secret leakage isn't a single point of failure for your most sensitive integrations.

  • SBS-ACS-012 Classify Users for Login Hours Restrictions

    Privileged accounts are classified for login-hours restrictions, adding a defense-in-depth layer — so even a compromised credential can't be exploited freely during off-hours when detection is thin.

Data protection

  • SBS-DATA-002 Maintain an Inventory of Long Text Area Fields Containing Regulated Data

    Your inventory of regulated-data fields is current, so you know exactly where personal data lives and can enforce protection, retention, and deletion obligations without scrambling to find it.

Integrations

  • SBS-INT-002 Inventory and Justification of Remote Site Settings

    Your Remote Site Settings inventory is documented with justification, so unvetted endpoints can't be authorized for Apex callouts and you know exactly which external services your code talks to.