HelloMavens

by HelloMavens

Salesforce Security Review

A security assessment of your Salesforce environment against the Security Benchmark for Salesforce (SBS).

Prepared for
Company: Northwind Health Systems
Email: compliance@northwindhealth.example
Size: Mid-market
Industry: Healthcare
Regulations in scope: HIPAA, SOC 2
Generated May 6, 2026 · SBS vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f · Engine v0.0.0-alpha.10

Section 1

Executive summary

A one-page read on where you stand.

Overall risk grade: C (74.0 / 100) · Mixed · 4 critical controls failed; grade capped at C

See the Appendix for what the passes mean and why some controls weren't applicable.

Risk tier breakdown

Top critical findings

  1. SBS-ACS-003 Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
  2. SBS-CPORTAL-001 Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.
  3. SBS-CPORTAL-002 Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.

Risk by business impact

  • Data breach exposure: 4 access / authentication / data-protection controls failed.
  • Compliance gap: 3 categories scored below 65 / 100 — likely weak spots in audit conversations.
  • Operational risk: 9.3% of controls couldn't be evaluated from your responses. A consultant scan would resolve them with evidence-based scoring.

Section 2

Category scorecards

One card per SBS category. Each shows the 0–100 score plus the OWASP and regulation citations relevant to that category.

Access controls
78
10 pass 2 fail
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(4), CC6.1, CC6.3, A.5.15 +7
Authentication
86
3 pass 1 fail
OWASP: A07:2021-Identification and Authentication Failures, A01:2021-Broken Access Control, A05:2021-Security Misconfiguration
Regs: 164.312(d), CC6.1, CC6.7, A.5.16 +7
Code security
100
1 pass 0 fail 3 inconclusive
OWASP: A06:2021-Vulnerable and Outdated Components, A08:2021-Software and Data Integrity Failures, A03:2021-Injection
Regs: 164.308(a)(1)(ii)(A), CC7.1, CC8.1, A.8.25 +14
Customer portals
0
0 pass 4 fail 1 inconclusive
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(4), 164.312(a), CC6.1, CC6.6 +12
Data protection
82
3 pass 1 fail
OWASP: A02:2021-Cryptographic Failures, A04:2021-Insecure Design, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(1)(ii)(A), 164.312(b), CC6.1, CC6.7 +18
Deployments
100
6 pass 0 fail
OWASP: A01:2021-Broken Access Control, A05:2021-Security Misconfiguration, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(8), CC8.1, A.5.15, A.8.32 +16
FDNS
100
1 pass 0 fail
OWASP: A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: 164.308(a)(8), 164.316(b), CC1.4, CC2.2 +5
FILE
71
2 pass 1 fail
OWASP: A01:2021-Broken Access Control, A04:2021-Insecure Design, A07:2021-Identification and Authentication Failures
Regs: CC6.1, CC6.6, A.5.10, A.5.13 +9
Integrations
44
2 pass 2 fail
OWASP: A03:2021-Injection, A04:2021-Insecure Design, A05:2021-Security Misconfiguration
Regs: CC6.6, CC6.7, A.5.19, A.8.20 +8
MON
50
2 pass 2 fail 1 inconclusive
OWASP: A09:2021-Security Logging and Monitoring Failures, A05:2021-Security Misconfiguration
Regs: 164.308(a)(1)(ii)(D), 164.312(b), CC7.2, CC7.3 +7
Connected apps & OAuth
100
4 pass 0 fail
OWASP: A01:2021-Broken Access Control, A05:2021-Security Misconfiguration, A08:2021-Software and Data Integrity Failures
Regs: 164.308(a)(4), CC6.1, CC6.6, A.5.15 +9
Security configuration
100
2 pass 0 fail
OWASP: A05:2021-Security Misconfiguration
Regs: 164.308(a)(8), CC7.1, A.5.36, A.8.9 +1

Section 3

Remediation detail

Every failed control with what to fix, why it matters, how to fix it, and how to verify the fix worked.

SBS-ACS-003 · Critical
Documented Justification for `Approve Uninstalled Connected Apps` Permission
Failed
What's wrong

Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.

Why it matters

The Approve Uninstalled Connected Apps permission allows users to bypass Connected App usage restrictions and self-authorize any OAuth application without administrator approval. This establishes an uncontrolled security boundary: users with this permission can grant external applications access to Salesforce data without oversight, enabling data exfiltration, unauthorized integrations, and potential account compromise. Unlike other permissions that require additional failures to exploit, this permission directly enables unauthorized third-party access the moment it is misassigned—making it a primary security boundary that must be tightly controlled.

Step-by-step fix
  1. Remove the Approve Uninstalled Connected Apps permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
  2. For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
  3. Ensure that connected apps required for business operations are properly installed and allowlisted rather than relying on this permission for end-user access.
  4. Reconcile and update the system of record to ensure complete and accurate inventory of all assignments of this permission.
Validation
  1. Enumerate all profiles, permission sets, and permission set groups that include the Approve Uninstalled Connected Apps permission using Salesforce Setup, Metadata API, Tooling API, or an automated scanner.
  2. Compare the enumerated list against the organization's designated system of record for this permission.
  3. Verify that every profile, permission set, and permission set group granting "Approve Uninstalled Connected Apps" has a corresponding entry in the system of record.
  4. Confirm that each entry includes:
    • A clear business or technical justification for requiring this permission,
    • Identification of the user role or persona (e.g., administrator, developer, integration manager),
    • Any applicable exception or approval documentation, and
    • Confirmation that the use case is limited to testing or managing connected app integrations.
  5. Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
  6. Flag as noncompliant any authorizations lacking documentation, justification, or assigned to unauthorized user populations.
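The enumeration in validation step 1 can be sketched as anonymous Apex against PermissionSetAssignment (profiles appear here too, because each profile owns a hidden permission set). Note the field name `PermissionsApproveUninstalledConnectedApps` is an assumption — confirm it against your org's PermissionSet describe before relying on the output.

```apex
// Sketch: list every holder of the "Approve Uninstalled Connected Apps"
// permission, for reconciliation against the system of record.
// Field name PermissionsApproveUninstalledConnectedApps is an assumption.
for (PermissionSetAssignment psa : [
        SELECT Assignee.Username, PermissionSet.Label, PermissionSet.Profile.Name
        FROM PermissionSetAssignment
        WHERE PermissionSet.PermissionsApproveUninstalledConnectedApps = true]) {
    // Profile is non-null when the grant comes from a profile-owned permission set
    System.debug(psa.Assignee.Username + ' <- ' +
        (psa.PermissionSet.Profile != null
            ? 'Profile: ' + psa.PermissionSet.Profile.Name
            : 'Permission set: ' + psa.PermissionSet.Label));
}
```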
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CPORTAL-001 · Critical
Prevent Insecure Direct Object Reference (IDOR) in Portal Apex
Failed
What's wrong

Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.

Why it matters

If portal-exposed Apex trusts user-controlled parameters to determine record access, external users can manipulate inputs to retrieve or modify unauthorized records. This may enable record enumeration, data exfiltration, or data corruption. IDOR vulnerabilities represent a critical authorization boundary failure.

Step-by-step fix
  • Enforce with sharing on portal-facing classes by default.
  • Scope queries using the running user's context where possible.
  • If record IDs are accepted as parameters, verify access using sharing or UserRecordAccess before returning or modifying data.
  • Remove user-controlled query structure and whitelist allowable filter inputs.
  • Enforce CRUD and FLS on all returned or modified records.
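The access-check pattern from the steps above can be sketched as follows. `CaseService` and the queried fields are illustrative names, not part of the benchmark; the sketch combines `with sharing`, an explicit `UserRecordAccess` check on the caller-supplied Id, and `WITH SECURITY_ENFORCED` for CRUD/FLS.

```apex
// Sketch of a portal-safe method: sharing enforced, record access verified
// before the caller-supplied Id is used, CRUD/FLS enforced on the query.
public with sharing class CaseService {
    @AuraEnabled
    public static Case getCase(Id caseId) {
        // Explicit access check in addition to "with sharing"
        UserRecordAccess ura = [
            SELECT RecordId, HasReadAccess
            FROM UserRecordAccess
            WHERE UserId = :UserInfo.getUserId() AND RecordId = :caseId];
        if (!ura.HasReadAccess) {
            throw new AuraHandledException('Access denied');
        }
        // WITH SECURITY_ENFORCED adds object- and field-level checks
        return [SELECT Id, Subject, Status FROM Case
                WHERE Id = :caseId WITH SECURITY_ENFORCED LIMIT 1];
    }
}
```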
Validation
  1. Identify all Apex classes exposing @AuraEnabled, @InvocableMethod, or @RestResource methods accessible to portal users.
  2. Review method parameters of type Id, String, collections, or maps that could influence record access.
  3. Verify that:
    • Classes run with sharing or implement equivalent authorization checks.
    • Record access is validated before query or DML.
    • CRUD and FLS are enforced.
    • Dynamic SOQL does not incorporate unsanitized user input.
  4. Attempt to access unauthorized records by manipulating record IDs or filter inputs from a portal user session.
  5. Flag any method that relies solely on user-supplied parameters to control record access as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CPORTAL-002 · Critical
Restrict Guest User Record Access
Failed
What's wrong

Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.

Why it matters

Guest users represent the highest-risk trust boundary in Salesforce portals—they are unauthenticated, have zero accountability, generate minimal audit trail, and operate with potential adversarial intent. When guest users are granted object permissions or can invoke custom Apex methods, attackers can systematically enumerate organizational data without even creating an account. Historical Salesforce security updates have repeatedly addressed guest user permission defaults because vendors consistently misconfigure this boundary. A single guest-accessible method that queries user records, cases, accounts, or custom objects creates a public API for data exfiltration accessible to anyone on the internet. This constitutes a Critical boundary violation: unauthenticated attackers access organizational data with no authentication required.

Step-by-step fix
  1. Remove all object-level permissions from guest user profiles except those explicitly required for authentication flows.
  2. Audit and remove guest user access to any custom Apex methods that query or return organizational data.
  3. For public data requirements (knowledge articles, case submission), implement service layer pattern:
    @AuraEnabled
    public static List<Knowledge__kav> getPublicArticles() {
        if (UserInfo.getUserType() == 'Guest') {
            // Allowlist-based, no parameters accepted
            return [SELECT Id, Title, Summary FROM Knowledge__kav 
                    WHERE PublicationStatus = 'Online' 
                    AND IsVisibleInPkb = true 
                    LIMIT 10];
        }
        throw new AuraHandledException('Access denied');
    }
    
  4. Implement network-level rate limiting and CAPTCHA for guest-accessible endpoints.
  5. Review Salesforce security updates and apply guest user permission restrictions from recent releases.
Validation
  1. Identify all guest user profiles used by customer portal sites (typically named "Site Guest User" or similar).
  2. Review object-level permissions for guest user profiles and verify that all business-related standard and custom objects have Read, Create, Edit, Delete permissions set to disabled.
  3. Enumerate all custom Apex classes containing @AuraEnabled methods and verify that none are accessible to guest users (either by checking profile permissions or testing invocation from guest context).
  4. For any guest-accessible functionality beyond authentication flows, verify implementation of service layer architecture with explicit access controls.
  5. Test by accessing the portal without authentication and attempting to invoke Apex methods or query objects via built-in Lightning controllers.
  6. Flag any guest user object permissions or method access as noncompliant.
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-CPORTAL-004 · Critical
Prevent Parameter-Based Record Access in Portal-Exposed Flows
Failed
What's wrong

Respondent attests portal-exposed Flows accept user-supplied input variables that determine which records are accessed. This is an IDOR vulnerability — Autolaunched Flows run in system context by default.

Why it matters

Flows accepting user-controlled input variables for record access create IDOR vulnerabilities allowing external users to access any record in the org. Because Autolaunched Flows run in system context without sharing by default, a single flow accepting a record ID input parameter bypasses all permissions and sharing rules. This constitutes a Critical boundary violation: unauthorized users access data they should never see, with no compensating controls required to fail.

Step-by-step fix
  1. Refactor flows to eliminate input variables controlling record access.
  2. Derive accessible records from authenticated user context (e.g., $User.Id, $User.ContactId, $User.AccountId).
  3. Configure flows to run in user context ("With Sharing") where available.
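Step 2 can be expressed as an invocable action a flow calls instead of accepting a record Id input; `PortalContactLookup` is an illustrative name, not from the report. The record is derived entirely from the running user's context.

```apex
// Sketch: derive the accessible record from the authenticated portal user,
// never from a caller-supplied input variable.
public with sharing class PortalContactLookup {
    @InvocableMethod(label='Get My Contact Id')
    public static List<Id> getMyContact(List<String> unused) {
        // Apex equivalent of $User.ContactId: resolve via the User record
        User u = [SELECT ContactId FROM User
                  WHERE Id = :UserInfo.getUserId() LIMIT 1];
        return new List<Id>{ u.ContactId };
    }
}
```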
Validation
  1. Using the inventory from SBS-CPORTAL-003, identify all portal-exposed Autolaunched Flows.
  2. For each flow, examine input variables for types that could contain record IDs (Text, Record, Text Collection).
  3. Review flow logic to determine if input variables influence Get Records, Update Records, or Delete Records elements.
  4. Flag any flow accepting user-supplied input variables that control record access as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-ACS-004 · High
Documented Justification for All Super Admin–Equivalent Users
Failed
What's wrong

Respondent attests they do NOT have documented justification for super-admin-equivalent users.

Why it matters

Without documented justification for Super Admin–equivalent users, organizations lose visibility into who possesses unrestricted access to the entire Salesforce environment. These users can read and modify all data, manage user accounts, and alter the security posture of the org without oversight. Undocumented Super Admin access prevents security teams from assessing breach impact, investigating administrative actions, or maintaining accountability for the most sensitive operations. The inability to identify and justify these users also prevents effective access reviews and creates persistent exposure from forgotten or orphaned administrative accounts.

Step-by-step fix
  1. Remove one or more of the Super Admin–equivalent permissions from any user who does not have a documented business or technical justification.
  2. For users who legitimately require this level of access, add or update rationale within the system of record.
  3. Reassess user access to ensure alignment with least privilege, reducing broad permissions where narrower privileges are sufficient.
Validation
  1. Enumerate all users who simultaneously possess the following permissions through any profile, permission set, or permission set group:
    • View All Data
    • Modify All Data
    • Manage Users
  2. Compile a list of all users meeting the criteria for Super Admin–equivalent access.
  3. Compare the list against the organization’s system of record.
  4. Verify that each Super Admin–equivalent user has corresponding documentation that includes:
    • A clear business or technical justification for requiring this level of access, and
    • Any relevant exception or approval records.
  5. Flag as noncompliant any users with Super Admin–equivalent access lacking documentation or justification.
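Because a user may accumulate the three permissions from different sources, validation step 1 needs an intersection rather than a single filtered query. A minimal anonymous-Apex sketch (field names `PermissionsViewAllData`, `PermissionsModifyAllData`, and `PermissionsManageUsers` are standard PermissionSet fields):

```apex
// Sketch: users holding View All Data, Modify All Data, AND Manage Users
// through any combination of profiles, permission sets, or groups.
Set<Id> viewAll = new Set<Id>();
Set<Id> modifyAll = new Set<Id>();
Set<Id> manageUsers = new Set<Id>();
for (PermissionSetAssignment a : [
        SELECT AssigneeId,
               PermissionSet.PermissionsViewAllData,
               PermissionSet.PermissionsModifyAllData,
               PermissionSet.PermissionsManageUsers
        FROM PermissionSetAssignment
        WHERE PermissionSet.PermissionsViewAllData = true
           OR PermissionSet.PermissionsModifyAllData = true
           OR PermissionSet.PermissionsManageUsers = true]) {
    if (a.PermissionSet.PermissionsViewAllData) viewAll.add(a.AssigneeId);
    if (a.PermissionSet.PermissionsModifyAllData) modifyAll.add(a.AssigneeId);
    if (a.PermissionSet.PermissionsManageUsers) manageUsers.add(a.AssigneeId);
}
Set<Id> superAdmins = viewAll.clone();
superAdmins.retainAll(modifyAll);
superAdmins.retainAll(manageUsers);
System.debug([SELECT Username FROM User
              WHERE Id IN :superAdmins AND IsActive = true]);
```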
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-CPORTAL-003 · High
Inventory Portal-Exposed Apex Classes and Flows
Failed
What's wrong

Respondent attests no inventory of portal-exposed Apex classes and Flows exists. External attack surface cannot be assessed; security testing has no authoritative scope.

Why it matters

Without a complete inventory of portal-exposed components, organizations cannot assess their external attack surface or enforce security reviews for externally accessible code. Security teams lose visibility into which business logic external users can invoke, preventing effective security testing, incident response, and access governance. This impairs the ability to detect unauthorized exposure of sensitive functionality or identify components requiring security hardening.

Step-by-step fix
  1. Enumerate all Apex classes containing @AuraEnabled methods.
  2. Enumerate all Autolaunched Flows embedded in Experience Cloud sites.
  3. For each component, document which portal user profiles and permission sets have access.
  4. Store the inventory in the designated system of record.
  5. Establish a process to update the inventory when new components are exposed to portals.
Validation
  1. Request the organization's inventory of portal-exposed Apex classes and Flows from the designated system of record.
  2. Query all Apex classes with @AuraEnabled methods accessible to portal user profiles.
  3. Query all Autolaunched Flows invoked from Experience Cloud pages or components.
  4. Verify each component appears in the inventory with documentation of which portal profiles can access it.
  5. Flag any portal-exposed component missing from the inventory as noncompliant.
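Inventory step 1 can be bootstrapped in anonymous Apex. `ApexClass.Body` is not filterable in SOQL, so the scan happens in the loop; for large orgs, page the query or use the Tooling API instead, and note managed-package bodies may be hidden (null).

```apex
// Sketch: candidate classes for the portal-exposure inventory.
List<String> exposed = new List<String>();
for (ApexClass c : [SELECT Name, Body FROM ApexClass WHERE Status = 'Active']) {
    // Body is null for hidden managed classes; skip those
    if (c.Body != null && c.Body.contains('@AuraEnabled')) {
        exposed.add(c.Name);
    }
}
System.debug('Candidate portal-exposed classes: ' + exposed);
```

Profile access per class (validation step 4) still needs a second pass, e.g. against SetupEntityAccess records for each class.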
Effort: ~1h · Difficulty: Easy · Scope: org
SBS-INT-004 · High
Retain API Total Usage Event Logs for 30 Days
Failed
What's wrong

Respondent attests they retain less than 30 days of `ApiTotalUsage` event logs. Without sufficient retention, anomalous API behavior is invisible after the fact.

Why it matters

Without retained API Total Usage logs, organizations lose visibility into REST, SOAP, and Bulk API activity—including user identity, connected app, client IP, resource accessed, and status codes. This materially degrades the ability to detect anomalous API behavior, investigate security incidents, attribute unauthorized access, and determine the scope of potential breaches. The absence of this visibility creates a significant gap in incident detection and response capabilities.

Step-by-step fix
  1. If the organization has only 1-day ApiTotalUsage EventLogFile availability in Salesforce, implement an automated daily export that downloads newly available ApiTotalUsage log files and stores them externally for at least 30 days.
  2. If the organization uses Salesforce-native retention, ensure the configured retention period for Event Log Files is not less than 30 days.
  3. Restrict access to the retained logs (Salesforce-native or external) to authorized personnel and designated service identities.
Validation
  1. Determine whether the organization relies on Salesforce-native retention (Event Monitoring/Shield/Event Monitoring add-on) or an external log store as the system of record for ApiTotalUsage EventLogFile data.
  2. If the organization relies on Salesforce-native retention, verify that EventLogFile data is retained for at least 30 days (for example, confirm the org is entitled to and configured for Event Log File retention that is at least 30 days and can retrieve ApiTotalUsage EventLogFile data within the preceding 30-day window).
  3. If the organization relies on an external log store (including all orgs with only 1-day ApiTotalUsage availability in Salesforce):
  • Verify an automated process exists that retrieves EventLogFile entries where EventType='ApiTotalUsage' and downloads the associated log files at least once every 24 hours.
  • Inspect job schedules/run history and confirm successful executions covering at least the last 30 days (no missed days).
  • From the external log store, retrieve ApiTotalUsage logs for (a) the oldest day in the preceding 30-day window and (b) the most recent day, and confirm both are accessible and attributable to the organization.
  • Verify access to the external log store is restricted to authorized roles and service identities responsible for monitoring and investigations.
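The daily export in fix step 1 starts with a query like the following sketch; the actual CSV download happens over REST against the record's `LogFile` endpoint from whatever job runs the export, which is out of scope here.

```apex
// Sketch: locate yesterday's ApiTotalUsage log files to export.
for (EventLogFile f : [
        SELECT Id, EventType, LogDate, LogFileLength
        FROM EventLogFile
        WHERE EventType = 'ApiTotalUsage'
          AND LogDate = YESTERDAY]) {
    // CSV body: /services/data/vXX.X/sobjects/EventLogFile/<Id>/LogFile
    System.debug('Fetch: ' + f.Id + ' (' + f.LogFileLength + ' bytes)');
}
```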
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-MON-003 · High
Monitor for Suspicious Logins
Failed
What's wrong

Respondent attests no continuous suspicious-login monitoring exists. Compromised credentials provide an undetected foothold; attacker dwell time grows until the breach is discovered another way.

Why it matters

Failure to continuously monitor and rapidly respond to suspicious login activity creates an immediate and severe risk of a full system compromise. This control's absence exposes the organization to three critical, cascading risks:

  1. The primary risk is that a successful suspicious login is the initial foothold for a sophisticated attack. Without automated, anomaly-based alerting, the attacker's presence goes unnoticed, leading to a prolonged "dwell time" (the period between compromise and detection). A longer dwell time allows the attacker to extensively reconnoiter the environment, escalate privileges, and establish persistence. The longer the intruders have to prepare, the harder it is to evict them from the Org.
  2. Compromised credentials grant attackers direct access to sensitive customer data, intellectual property, and internal records. The lack of timely detection enables the attacker to perform mass data exfiltration via API, reporting, or export functionality before the account can be disabled, leading to financial, regulatory, and reputational damage.
  3. An attacker with elevated access can modify or delete critical configuration settings, business-critical data, or code, leading to system downtime, operational failures, and a loss of data integrity.
Step-by-step fix
  1. Deploy a dedicated Security Information and Event Management (SIEM) solution or a specialized login anomaly detection platform. If a platform is already in place, ensure all Salesforce authentication and event logs, particularly LoginHistory and relevant Event Monitoring logs, are being successfully exported, ingested, and correctly parsed.
  2. Configure the monitoring solution to explicitly analyze login events from all sources, including:
  • Human Users: Both Single Sign-On (SSO) and direct Salesforce credentials.
  • Non-Human/Integration Accounts: Any connected systems, APIs, or automated processes.
  3. Periodically review and test the effectiveness of the deployed detection rules, minimizing false positives and updating baselines as organizational travel patterns or integration accounts change.
  4. Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated suspicious login alerts.
Validation
  1. Verify the organization utilizes a continuous analytics solution that looks for anomalies in Salesforce login data.
  2. Confirm that the analytics solution's monitoring scope includes all user logins and all non-human integration or application user logins, regardless of whether the authentication method is Single Sign-On (SSO) or direct Salesforce login.
  3. Confirm that there is a documented, mandatory procedure to periodically (e.g. quarterly) review, test, and update the login anomaly detection rules and baselines to ensure they remain effective and minimize false positives. The review should also analyze the suspicious login alerts investigated during the review period; an absence of investigations may mean the rules are not firing.
  4. Review the process and history of investigating and responding to suspicious login alerts.
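For the ingestion side of fix step 1, LoginHistory is available without an Event Monitoring entitlement and carries the basic signals an anomaly rule needs (who, when, from where, outcome). A minimal extraction sketch:

```apex
// Sketch: recent login events as raw material for anomaly baselining.
for (LoginHistory lh : [
        SELECT UserId, LoginTime, SourceIp, Status, Application, LoginType
        FROM LoginHistory
        WHERE LoginTime = LAST_N_DAYS:7
        ORDER BY LoginTime DESC
        LIMIT 200]) {
    System.debug(lh.UserId + ' ' + lh.SourceIp + ' ' + lh.Status);
}
```

In practice this query would run from the export job feeding the SIEM, not interactively.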
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-MON-004 · High
Monitor for Suspicious API Activity
Failed
What's wrong

Respondent attests no continuous API anomaly monitoring exists. Post-authentication attacks via stolen tokens or compromised integrations operate without detection.

Why it matters

Failure to monitor API activity creates a critical blind spot for post-authentication attacks, leading to severe and cascading consequences:

  1. The primary risk is that a compromised application, integration account, or stolen session token can be used to perform mass, high-speed data exfiltration.
  2. Attackers can leverage API access to perform unauthorized and undetectable manipulation of the Salesforce Org.
Step-by-step fix
  1. Deploy or build a dedicated analytics solution for granular API log analysis. Ensure all relevant Salesforce Event Monitoring logs for API activity are being successfully exported, ingested, and correctly parsed into the monitoring platform.
  2. Configure the analytics solution to first establish a baseline of normal API behavior for every user and integration account. Following the baseline period, configure and tune high-severity anomaly detection rules to detect deviations, specifically rules that:
  • Alert on an anomalous spike in data retrieval (read/query) volume.
  • Monitor for CRUD (Create, Read, Update, Delete) operations on objects or security settings outside of an entity's established historical interaction baseline.
  • Flag a change from read-heavy activity to an unusual volume of write or delete operations.
  • Alert on API calls originating from a suspicious or atypical geolocation/IP address for the accessing entity.
  3. Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated API anomaly alerts. The procedure must include immediate steps to lock or deactivate the compromised user or application account and revoke stolen access tokens to prevent further data loss or system manipulation.
  4. Establish a mandatory, recurring (e.g. quarterly) review process to:
  • Periodically test the effectiveness of the deployed detection rules using simulated incidents (e.g. table-top exercises).
  • Review and update the established baselines as new integrations are deployed or organizational usage patterns change.
Validation
  1. Verify the organization utilizes a continuous analytics solution (e.g. SIEM, log aggregation platform) that is verifiably integrated with and ingesting all required API activity data, specifically the detailed Event Monitoring logs from Salesforce.
  2. Confirm that the monitoring scope includes all API access, including user-initiated, non-human/integration accounts, and all API methods (e.g. REST, SOAP, Bulk). Verify the logs ingested are granular enough to track specific API calls (CRUD operations) against specific objects.
  3. Request evidence (e.g. SIEM rule configurations, simulated alert outputs) to verify that the following specific, high-severity anomaly detection rules are active and tuned:
  • Mass Data Exfiltration: Alerts on an anomalous spike in data retrieval volume for a specific user or application, especially for sensitive objects.
  • Unauthorized Object Access: Alerts on a user or integration account attempting CRUD operations on objects they have no historical precedent for interacting with (e.g. Sales user accessing Security Settings).
  • Suspicious Action Shift: Alerts on a shift from a baseline of read-only operations to a high volume of write or delete operations by a user or application.
  • Suspicious Network Origins: Alerts on API calls originating from a suspicious or high-risk geolocation/IP address that is unusual for the accessing entity.
  4. Process review (triage and response): Examine the documented procedures for triage, investigation, and response to API anomaly alerts. Review a sample of anomaly findings and the actions taken on them over, e.g., the past 6 months to confirm that alerts are being acted upon within the defined parameters.
Effort: ~6h · Difficulty: Medium · Scope: mechanism
SBS-AUTH-003 · Moderate
Prohibit Broad or Unrestricted Profile Login IP Ranges
Failed
What's wrong

Respondent attests at least one profile has an unrestricted login IP range (e.g., `0.0.0.0/0`) that defeats the purpose of IP allow-listing.

Why it matters

Overly broad login IP ranges effectively disable network-based access controls, allowing authentication from any location on the internet. However, exploitation requires credentials to be compromised first—this control provides defense-in-depth rather than establishing a primary security boundary. When authentication controls (SBS-AUTH-001) are enforced, IP restrictions serve as an additional layer that limits the blast radius of credential compromise.

Step-by-step fix
  1. Remove any profile login IP ranges that effectively grant unrestricted global access.
  2. Replace them with IP ranges that correspond to approved corporate networks, office locations, VPN ingress points, or other authorized infrastructure.
  3. Validate that updated network restrictions do not block legitimate access paths and that users can authenticate through sanctioned networks.
  4. Establish an internal governance process to review and approve all future additions of profile login IP ranges.
Validation
  1. Retrieve all profile login IP ranges via Setup → Profiles → Login IP Ranges or by querying the Profile metadata (loginIpRanges field) using the Metadata API.
  2. For each profile, enumerate all configured login IP ranges.
  3. Identify any ranges that:
    • Cover the entire IPv4 space, or
    • Represent effectively unrestricted access (e.g., 0.0.0.0–255.255.255.255, 1.1.1.1–255.255.255.255, or similar patterns).
  4. Confirm that all IP ranges align with organizational security policy and defined network boundaries.
  5. Flag any profile with an impermissible or overly broad range.
  6. Download API Total Usage logs (EventLogFile - ApiTotalUsage, available in free tier of Event Monitoring) to validate IP restrictions are effective:
    • Extract unique CLIENT_IP values from recent API activity.
    • Compare against documented approved organizational network ranges.
    • Identify any new or unexpected IP addresses making API calls.
    • Cross-reference unusual IPs with profile assignments to identify potential policy gaps.
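The CLIENT_IP extraction in validation step 6 can be sketched as below, assuming the ApiTotalUsage CSV body has already been downloaded (e.g. via the EventLogFile `LogFile` endpoint) into `csvBody`. The split-based parsing is deliberately naive and does not handle quoted commas.

```apex
// Sketch: distinct client IPs from a downloaded ApiTotalUsage CSV,
// for comparison against approved network ranges.
String csvBody = '';                       // downloaded log file content
List<String> rows = csvBody.split('\n');
Set<String> seenIps = new Set<String>();
if (!rows.isEmpty()) {
    // Event Log File headers are quoted, e.g. "CLIENT_IP"
    Integer ipCol = rows[0].split(',').indexOf('"CLIENT_IP"');
    for (Integer i = 1; i < rows.size() && ipCol >= 0; i++) {
        List<String> cols = rows[i].split(',');
        if (cols.size() > ipCol) {
            seenIps.add(cols[ipCol].replace('"', ''));
        }
    }
}
System.debug('Client IPs to compare against approved ranges: ' + seenIps);
```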
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
SBS-DATA-002 · Moderate
Maintain an Inventory of Long Text Area Fields Containing Regulated Data
Failed
What's wrong

Respondent attests they do NOT maintain an inventory of Long Text Area fields containing regulated data. Without it, controls cannot be applied to the right fields.

Why it matters

Without a current inventory of fields containing regulated data, organizations cannot systematically apply appropriate protection, retention, or access controls to sensitive data locations—and may be unable to fulfill privacy obligations such as GDPR's Right to Erasure or CCPA deletion requests that require knowing all locations where personal data is stored. During audits or breach investigations, the absence of a maintained inventory delays response times and may result in incomplete remediation or missed data locations.

Step-by-step fix
  1. Generate an inventory using scan results, administrative review, and metadata analysis.
  2. Document all LTA fields containing regulated data and classify the associated data types.
  3. Establish a recurring review cycle to update the inventory.
  4. Integrate the inventory into governance functions such as retention, DLP, access reviews, and breach response planning.
Validation
  1. Obtain the organization's documented inventory of Long Text Area fields containing regulated data.
  2. Compare the inventory against Salesforce metadata to confirm all relevant fields are included.
  3. Review scan results or administrative evidence demonstrating how fields were identified.
  4. Verify that the inventory includes object name, field API name, data classification, and last review date.
  5. Determine whether the inventory is maintained and current; missing, outdated, or incomplete inventories indicate noncompliance.
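The metadata side of validation step 2 can be automated with Describe calls: Long Text Area fields report as display type TEXTAREA with a length above 255, which distinguishes them from plain Text Area fields (rich text fields also match and can be told apart via `isHtmlFormatted()`).

```apex
// Sketch: enumerate candidate Long Text Area fields across all objects.
for (Schema.SObjectType t : Schema.getGlobalDescribe().values()) {
    Schema.DescribeSObjectResult dr = t.getDescribe();
    for (Schema.SObjectField f : dr.fields.getMap().values()) {
        Schema.DescribeFieldResult fd = f.getDescribe();
        if (fd.getType() == Schema.DisplayType.TEXTAREA && fd.getLength() > 255) {
            System.debug(dr.getName() + '.' + fd.getName() +
                         ' (length ' + fd.getLength() + ')');
        }
    }
}
```

Which of these fields actually hold regulated data still requires the content scan or administrative review described in fix step 1.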
Effort: ~3h · Difficulty: Medium · Scope: inventory
SBS-FILE-003 · Moderate
Periodic Review and Cleanup of Public Content Links
Failed
What's wrong

Respondent attests no recurring review of active Public Content links exists. Legacy and accidentally-shared links accumulate as undetected, persistent exposure.

Why it matters

Without periodic review, Public Content links accumulate over time—including legacy links created before security policies were established, links that have outlived their business purpose, and links created through accidental or unauthorized sharing. These forgotten links represent persistent exposure that may go undetected indefinitely. While this control does not prevent initial link creation issues, it provides a governance mechanism to identify and remediate accumulated risk, supporting defense-in-depth and reducing the organization's overall exposure footprint.

Step-by-step fix
  1. Establish a documented process for periodic review of all ContentDistribution records.
  2. Define a review cadence appropriate to organizational risk tolerance (quarterly recommended as a baseline).
  3. Create a scanning mechanism (script, report, or tool) to enumerate all active Public Content links.
  4. Define review criteria to identify links requiring remediation: missing expiry dates, missing passwords on sensitive content, links older than a defined threshold, or links to content no longer requiring external sharing.
  5. Assign ownership for the review process and remediation actions.
  6. Maintain records of each review cycle for audit purposes.
Validation
  1. Verify the organization has a documented process for periodic Public Content link review.
  2. Confirm the review cadence is defined (e.g., quarterly, monthly) and appropriate for the organization's risk profile.
  3. Obtain evidence of recent review execution (e.g., scan results, remediation records, review meeting notes).
  4. Verify that reviews include all active ContentDistribution records.
  5. Confirm that identified issues are tracked through remediation or deletion.
  6. Flag organizations without a documented review process or evidence of recent execution.
Effort: ~1h · Difficulty: Easy · Scope: org
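The scanning mechanism in step 3 and the criteria in step 4 can be sketched in a few lines of Python. `ExpiryDate`, `Password`, and `CreatedDate` are standard ContentDistribution fields, but treat the record shape below as an assumption about your SOQL export format, and note this sketch does not distinguish sensitive from non-sensitive content (that classification is yours to supply).

```python
from datetime import datetime, timedelta, timezone

def flag_public_links(records, max_age_days=365, now=None):
    """Apply the step-4 review criteria to exported ContentDistribution rows.

    records -- iterable of dicts keyed by ContentDistribution field names
               (Id, ExpiryDate, Password, CreatedDate)
    Returns a list of (record_id, reasons) pairs needing remediation.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    flagged = []
    for r in records:
        reasons = []
        if r.get("ExpiryDate") is None:
            reasons.append("no expiry date")
        if not r.get("Password"):
            reasons.append("no password")
        created = r.get("CreatedDate")
        if created is not None and created < cutoff:
            reasons.append(f"older than {max_age_days} days")
        if reasons:
            flagged.append((r.get("Id"), reasons))
    return flagged
```

The flagged list becomes the remediation queue tracked in step 6's audit records.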
SBS-INT-003 · Moderate
Inventory and Justification of Named Credentials
Failed
What's wrong

Respondent attests they do NOT maintain an inventory and justification list for Named Credentials. Forgotten Named Credentials with valid stored secrets are a hidden third-party access surface.

Why it matters

Without a documented inventory and justification for Named Credentials, undocumented or unjustified configurations may expose the organization to data leakage, unauthorized integrations, or reliance on insecure or untrusted endpoints. However, this control provides governance documentation rather than detection or prevention capability: it supports audit readiness and informed decision-making about authenticated external connections, but other controls are required to detect or prevent actual misuse of approved credentials.

Step-by-step fix
  1. Add any undocumented Named Credentials to the system of record.
  2. Document a valid business justification for each Named Credential.
  3. Remove, disable, or reconfigure any Named Credentials that cannot be justified or that reference untrusted endpoints.
  4. Establish a recurring reconciliation process to ensure Named Credentials remain fully inventoried and justified.
Validation
  1. Enumerate all Named Credentials using Salesforce Setup, Metadata API, Tooling API, or Connect REST API.
  2. Retrieve the organization's system of record for approved external endpoints and integration credentials.
  3. Compare the Salesforce list to the system of record to confirm all Named Credentials are documented.
  4. Verify that each documented Named Credential includes:
    • The external endpoint URL
    • The authentication type (named principal or per-user)
    • The business justification for the integration
  5. Flag any Named Credentials missing from the inventory or lacking justification as noncompliant.
Effort: 4–8h per entity · Difficulty: Hard · Scope: entity
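The reconciliation in validation steps 3–5 (and the recurring process in fix step 4) amounts to a three-way set comparison. A minimal Python sketch, assuming the org's Named Credentials have been enumerated to a set of DeveloperNames and the system of record exports as a dict; the entry keys `endpoint`, `auth_type`, and `justification` mirror validation step 4 but are illustrative, not a prescribed schema:

```python
def reconcile_named_credentials(org_creds, system_of_record):
    """Compare Named Credentials found in the org against the documented inventory.

    org_creds        -- set of DeveloperNames enumerated from the org
    system_of_record -- dict mapping DeveloperName to an entry dict with
                        'endpoint', 'auth_type', and 'justification'
    Returns (undocumented, unjustified, orphaned_docs).
    """
    # Step 3/5: in the org but missing from the inventory -> noncompliant.
    undocumented = sorted(org_creds - system_of_record.keys())
    # Step 4/5: documented but missing a required attribute -> noncompliant.
    unjustified = sorted(
        name for name, entry in system_of_record.items()
        if name in org_creds and not all(
            entry.get(k) for k in ("endpoint", "auth_type", "justification")
        )
    )
    # Documented entries with no matching credential: stale records to clean up.
    orphaned = sorted(system_of_record.keys() - org_creds)
    return undocumented, unjustified, orphaned
```

Running this on each reconciliation cycle gives an audit artifact for the system of record: empty `undocumented` and `unjustified` lists are the compliant state.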
HelloMavens

Your roadmap is clear. Implementation is where most security efforts stall.

Our cofounders led product and engineering teams at GrubHub and Rocket, led all of product and engineering at National Debt Relief, and have built and audited Salesforce environments at every scale, from Series A startups to Fortune 500 enterprises.

  • Walk through your top remediation priorities together.
  • Get a fixed-scope plan to close the highest-impact gaps first.
  • Optional: bring us in to ship the fixes and keep them from regressing.
Book a 30-minute remediation review

No obligation. We'll review your top three findings with you and tell you whether HelloMavens is the right fit.

Appendix

Methodology, disclaimer, and glossary

Methodology

Each control was evaluated against the Security Benchmark for Salesforce vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Controls are scored as pass, fail, inconclusive, or not applicable. Category scores are weighted by risk tier (Critical 5, High 3, Moderate 2). The overall score is a weighted average across categories, proportional to each category's share of Critical-tier and High-tier controls. Inconclusive and not-applicable controls are excluded from denominators. If any Critical-tier control failed, the overall grade is capped at C regardless of other scores.
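The per-category scoring and the critical-fail cap described above can be sketched as follows. This is an illustrative Python sketch of the methodology, not the engine's actual implementation; in particular, the numeric ceiling for the C band (`c_ceiling=74.0`, matching this report's capped score) is an assumption.

```python
TIER_WEIGHTS = {"Critical": 5, "High": 3, "Moderate": 2}

def category_score(results):
    """Score one category 0-100 from (tier, outcome) pairs.

    Inconclusive and not-applicable controls are dropped from the
    denominator, per the methodology above.
    """
    scored = [(t, o) for t, o in results if o in ("pass", "fail")]
    total = sum(TIER_WEIGHTS[t] for t, _ in scored)
    if total == 0:
        return None  # nothing evaluable in this category
    passed = sum(TIER_WEIGHTS[t] for t, o in scored if o == "pass")
    return 100.0 * passed / total

def apply_critical_cap(overall, all_results, c_ceiling=74.0):
    """Cap the overall score if any Critical-tier control failed."""
    if any(t == "Critical" and o == "fail" for t, o in all_results):
        return min(overall, c_ceiling)
    return overall
```

For example, a category with a passing Critical control (weight 5), a failing High control (weight 3), and an inconclusive Moderate control scores 100 × 5 / 8 = 62.5.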

Engine version 0.0.0-alpha.10. SBS vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Disclaimer version 2026-05-02-placeholder-1.

Disclaimer

The HelloMavens Salesforce Security Audit produces a directional security assessment based on questionnaire responses you provide.

This report is not a substitute for a formal security audit, penetration test, or compliance certification.

HelloMavens LLC makes no warranty, express or implied, regarding the completeness, accuracy, or fitness for any particular purpose of this report.

You confirm that you are authorized to submit this information about your organization.

HelloMavens LLC will process your responses solely to generate this report and will not retain raw scan data after report generation. Aggregate, anonymized scoring data may be retained for benchmarking.

Any remediation actions you take based on this report are at your own risk and discretion.

Glossary
SBS
Security Benchmark for Salesforce — an open standard of audit-ready controls maintained at github.com/Salesforce-Security-Benchmark.
OWASP Top 10
A standard awareness document for developers and web application security maintained by the Open Web Application Security Project. The 2021 edition is referenced throughout this report.
HIPAA Security Rule
U.S. federal regulation governing security standards for protected health information.
SOC 2
Service Organization Control 2 — an audit framework focused on the AICPA Trust Services Criteria (security, availability, processing integrity, confidentiality, privacy).
Inconclusive
A control that could not be evaluated from the evidence provided (e.g. you answered "I don't know"). Inconclusive controls are excluded from scoring denominators per spec §8.
Critical-fail cap
If any Critical-tier control fails, the overall risk grade is capped at C regardless of other scores. The cap exists to highlight catastrophic risk areas.
Consultant scan
An evidence-based audit run by a security consultant against your Salesforce org via the upcoming sbs-scan CLI. Resolves inconclusive controls with high-confidence findings.
Where you're already strong

What these passing controls protect against, in plain language.

Access controls

  • SBS-ACS-006 Documented Justification for `Use Any API Client` Permission

    This permission, which bypasses API Access Control entirely, is granted only to justified users, so external apps can't get data access without your vetting and allowlisting first.

  • SBS-ACS-001 Enforce a Documented Permission Set Model

    Your permission set model is documented and enforced, so privilege sprawl can't accumulate quietly — every authorization construct conforms to the standard, and access reviews actually have something to compare against.

  • SBS-ACS-002 Documented Justification for All `API-Enabled` Authorizations

    Your API-enabled authorizations are tracked with documented justification, so you can see who has programmatic data access at a glance and prove every permission is intentional during audit reviews.

  • SBS-ACS-005 Only Use Custom Profiles for Active Users

    Regular users sit on custom profiles, not Salesforce-managed standard ones, so platform updates can't enable new permissions without your approval and least privilege is actually enforceable.

  • SBS-ACS-007 Maintain Inventory of Non-Human Identities

    You have a current inventory of non-human identities, so automated access can be reviewed, orphaned credentials can be retired, and security incidents involving integrations are actually investigable.

  • SBS-ACS-008 Restrict Broad Privileges for Non-Human Identities

    Non-human identities aren't carrying broad privileges by default, so a compromised API key or OAuth token can't bypass sharing rules or trigger administrative operations org-wide.

  • SBS-ACS-011 Enforce Governance of Access and Authorization Changes

    Access changes go through governed approval and audit, so excessive permissions can't slip in unnoticed and incident investigations have a paper trail to follow.

  • SBS-ACS-009 Implement Compensating Controls for Privileged Non-Human Identities

    Privileged non-human identities have compensating controls layered on top of credentials, so secret leakage isn't a single point of failure for your most sensitive integrations.

  • SBS-ACS-010 Enforce Periodic Access Review and Recertification

    Access is reviewed and recertified on a cadence, so privilege creep can't quietly accumulate from old roles, temporary projects, and forgotten grants — least privilege stays current over time.

  • SBS-ACS-012 Classify Users for Login Hours Restrictions

    Privileged accounts are classified for login-hours restrictions, adding a defense-in-depth layer — so even a compromised credential can't be exploited freely during off-hours when detection is thin.

Authentication

  • SBS-AUTH-001 Enable Organization-Wide SSO Enforcement Setting

    SSO enforcement is on org-wide, so users can't quietly authenticate around centralized identity management — credential-based attacks lose their direct path into your org.

  • SBS-AUTH-004 Enforce Strong Multi-Factor Authentication for External Users with Substantial Access to Sensitive Data

    MFA is enforced for external users with substantial data access, so a stolen password alone can't sign someone in — phishing and credential-stuffing lose their main path to your sensitive data.

  • SBS-AUTH-002 Govern and Document All Users Permitted to Bypass Single Sign-On

    SSO bypass exceptions are documented and approved, so off-SSO accounts can't proliferate unnoticed and audit reviews have clean visibility into every authentication exception.

Code security

  • SBS-CODE-004 Prevent Sensitive Data in Application Logs

    Sensitive data is kept out of application logs, so a low-privilege account with log access can't extract credentials or PII without tripping the access controls on the source objects.

Data protection

  • SBS-DATA-001 Implement Mechanisms to Detect Regulated Data in Long Text Area Fields

    You can detect regulated data in long-text fields, so PII can't accumulate in unknown locations — GDPR Right-to-Erasure and CCPA deletion requests are actually executable when they arrive.

  • SBS-DATA-003 Maintain Tested Backup and Recovery for Salesforce Data and Metadata

    Backups are tested, not just configured, so accidental deletion, malicious destruction, or configuration corruption can be recovered from with confidence rather than hope.

  • SBS-DATA-004 Require Field History Tracking for Sensitive Fields

    Field history tracking is on for sensitive fields, so unauthorized or accidental changes are detectable, investigatable, and accountable rather than silent.

Deployments

  • SBS-DEP-005 Implement Secret Scanning for Salesforce Source Repositories

    Secret scanning runs on your repos, so hardcoded credentials get caught before they're committed — and the supply-chain path through repository access stays closed to outsiders.

  • SBS-DEP-001 Require a Designated Deployment Identity for Metadata Changes

    Metadata changes go through a designated deployment identity, so production changes are attributable and unauthorized direct edits stand out instead of blending into routine admin activity.

  • SBS-DEP-002 Establish and Maintain a List of High-Risk Metadata Types Prohibited from Direct Production Editing

    You've defined which metadata types are high-risk, so deployment governance has clear boundaries — Apex, auth settings, and outbound connectivity can't be edited directly in production.

  • SBS-DEP-003 Monitor and Alert on Unauthorized Modifications to High-Risk Metadata

    Unauthorized changes to high-risk metadata trigger alerts, so malicious or accidental drift gets surfaced quickly instead of persisting for weeks before being noticed.

  • SBS-DEP-004 Establish Source-Driven Development Process

    Production configuration ties back to a source-controlled approval trail, so 'what changed, when, by whom' is answerable — incident investigation has a real starting point instead of guesswork.

  • SBS-DEP-006 Configure Salesforce CLI Connected App with Token Expiration Policies

    Your CLI Connected App has token expiration policies, so a stolen laptop's token files don't grant indefinite access to production — the credential exposure window is bounded.

Foundations

  • SBS-FDNS-001 Centralized Security System of Record

    Your Salesforce security configurations, exceptions, justifications, and SBS-required inventories live in a centralized system of record, so governance survives staff turnover and audit conclusions can actually be reconstructed from durable artifacts.

File and content sharing

  • SBS-FILE-002 Require Passwords on Public Content Links for Sensitive Content

    Sensitive Public Content links are password-protected, so an intercepted or accidentally-shared URL alone isn't enough to view the content — there's an authentication layer between link possession and data exposure.

  • SBS-FILE-001 Require Expiry Dates on Public Content Links

    Public Content links carry expiry dates appropriate to the sensitivity of what they share, so a leaked or forgotten link doesn't extend exposure indefinitely — time-bounded access is governance you can actually enforce.

Integrations

  • SBS-INT-001 Enforce Governance of Browser Extensions Accessing Salesforce

    Browser extensions accessing Salesforce go through governance, so cloned or malicious extensions can't quietly harvest session tokens or exfiltrate data from authenticated users.

  • SBS-INT-002 Inventory and Justification of Remote Site Settings

    Your Remote Site Settings inventory is documented with justification, so unvetted endpoints can't be authorized for Apex callouts and you know exactly which external services your code talks to.

Monitoring and detection

  • SBS-MON-001 Enable Event Monitoring Log Storage

    Event Monitoring log storage is enabled for the event types your security policy requires, so investigations have the telemetry to reconstruct what happened — file downloads, API calls, report exports, all retained instead of silently dropped.

  • SBS-MON-002 Retaining Event Logs

    Event logs persist for your required retention period, exported when Salesforce native retention falls short and protected from administrative deletion, so a slow-burn incident can still be reconstructed weeks or months after the fact.

Connected apps & OAuth

  • SBS-OAUTH-001 Require Formal Installation of Connected Apps

    Connected Apps go through formal installation, so security configuration — refresh token lifetimes, session policies, IP restrictions — is enforced by you, not inherited from the external developer.

  • SBS-OAUTH-002 Require Profile or Permission Set Access Control for Connected Apps

    Connected Apps require explicit profile or permission set access, so a Connected App can't quietly authenticate any user — least-privilege actually applies to OAuth sessions.

  • SBS-OAUTH-003 Add Criticality Classification of OAuth-Enabled Connected Apps

    OAuth-enabled Connected Apps are classified by criticality, so risk assessment is anchored — you know which integrations touch sensitive data and where to focus governance and monitoring.

  • SBS-OAUTH-004 Due Diligence Documentation for High-Risk Connected App Vendors

    High-risk Connected App vendors have documented due diligence on file, so onboarding decisions are informed and contractual obligations match the access being granted.

Security configuration

  • SBS-SECCONF-001 Establish a Salesforce Health Check Baseline

    You have a Health Check baseline, so configuration drift is actually detectable — you can tell when settings diverge from intentional decisions versus accumulated neglect.

  • SBS-SECCONF-002 Review and Remediate Salesforce Health Check Deviations

    Health Check deviations get reviewed and remediated, so configuration drift doesn't quietly weaken your security posture between audits — your baseline stays meaningful over time.

What we couldn't determine

For these controls we don't have enough evidence to call a pass or a fail. Each one has a path to a definitive answer below.

  • SBS-CODE-001 Mandatory Peer Review for Salesforce Code Changes

    You answered "I don't know" to: "Does every Apex or Lightning code change get peer-reviewed and approved before it goes to production?". A consultant scan would resolve this with evidence-based scoring.

  • SBS-CODE-002 Pre-Merge Static Code Analysis for Apex and LWC

    You answered "I don't know" to: "Is there an automated security scanner (e.g., Salesforce Code Analyzer, PMD) that checks every code change BEFORE it gets merged?". A consultant scan would resolve this with evidence-based scoring.

  • SBS-CODE-003 Implement Persistent Apex Application Logging

    You answered "I don't know" to: "Do you have an Apex logging framework that writes events to a permanent place (not just the temporary "debug log")?". A consultant scan would resolve this with evidence-based scoring.

  • SBS-CPORTAL-005 Conduct Penetration Testing for Portal Security

    You answered "I don't know" to: "Have you had a security professional run penetration tests against your portal — both before launch and on a regular schedule since?". A consultant scan would resolve this with evidence-based scoring.

  • SBS-MON-005 Monitor API Usage Against Limits

    You answered "I don't know" to: "Do you continuously track how close you are to your daily Salesforce API limit, with proactive alerts at a defined utilization threshold (e.g., 80-90%) before you hit the cap?". A consultant scan would resolve this with evidence-based scoring.