by HelloMavens
Salesforce Security Review
A security assessment of your Salesforce environment against the Security Benchmark for Salesforce (SBS).
Section 1
Executive summary
A one-page read on where you stand.
See the Appendix for what the passes mean and why some controls weren't applicable →
Risk tier breakdown
Top critical findings
- SBS-ACS-003 — Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
- SBS-CPORTAL-001 — Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.
- SBS-CPORTAL-002 — Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.
Risk by business impact
- Data breach exposure: 4 access / authentication / data-protection controls failed.
- Compliance gap: 3 categories scored below 65 / 100 — likely weak spots in audit conversations.
- Operational risk: 9.3% of controls couldn't be evaluated from your responses. A consultant scan would resolve them with evidence-based scoring.
Section 2
Category scorecards
One card per SBS category. Each shows the 0–100 score plus the OWASP and regulation citations relevant to that category.
Section 3
Remediation detail
Every failed control with what to fix, why it matters, how to fix it, and how to verify the fix worked.
Respondent attests the `Approve Uninstalled Connected Apps` permission is NOT properly restricted with documented justification per holder. End-users with this permission can self-approve OAuth grants.
The Approve Uninstalled Connected Apps permission allows users to bypass Connected App usage restrictions and self-authorize any OAuth application without administrator approval. This establishes an uncontrolled security boundary: users with this permission can grant external applications access to Salesforce data without oversight, enabling data exfiltration, unauthorized integrations, and potential account compromise. Unlike other permissions that require additional failures to exploit, this permission directly enables unauthorized third-party access the moment it is misassigned—making it a primary security boundary that must be tightly controlled.
- Remove the `Approve Uninstalled Connected Apps` permission from any profile, permission set, or permission set group that lacks a documented justification or is assigned to end-users.
- For any authorization that legitimately requires this permission (e.g., administrators or developers testing connected apps), add or update the rationale in the system of record to clearly justify the need and identify the specific role or use case.
- Ensure that connected apps required for business operations are properly installed and allowlisted rather than relying on this permission for end-user access.
- Reconcile and update the system of record to ensure a complete and accurate inventory of all assignments of this permission.
- Enumerate all profiles, permission sets, and permission set groups that include the `Approve Uninstalled Connected Apps` permission using Salesforce Setup, the Metadata API, the Tooling API, or an automated scanner.
- Compare the enumerated list against the organization's designated system of record for this permission.
- Verify that every profile, permission set, and permission set group granting `Approve Uninstalled Connected Apps` has a corresponding entry in the system of record.
- Confirm that each entry includes:
- A clear business or technical justification for requiring this permission,
- Identification of the user role or persona (e.g., administrator, developer, integration manager),
- Any applicable exception or approval documentation, and
- Confirmation that the use case is limited to testing or managing connected app integrations.
- Verify that the permission is not assigned to end-user profiles or permission sets intended for general business users.
- Flag as noncompliant any authorizations lacking documentation, justification, or assigned to unauthorized user populations.
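The reconciliation described above can be sketched as a set comparison between what the org actually grants and what the system of record documents. The container names and record fields (`justification`, `role`) below are illustrative assumptions, not a prescribed schema:

```python
def reconcile_permission_grants(granting_containers, system_of_record):
    """Compare containers (profiles / permission sets / groups) that grant a
    permission against the documented system of record.

    granting_containers: set of container API names found in the org.
    system_of_record: dict mapping container name -> documentation record,
        e.g. {"justification": "...", "role": "..."} (illustrative shape).
    Returns (undocumented, unjustified, stale) sets.
    """
    documented = set(system_of_record)
    undocumented = granting_containers - documented   # in org, not in SoR
    stale = documented - granting_containers          # in SoR, no longer in org
    # Documented but missing a justification or role -> noncompliant
    unjustified = {
        name for name in granting_containers & documented
        if not system_of_record[name].get("justification")
        or not system_of_record[name].get("role")
    }
    return undocumented, unjustified, stale
```

Every name in `undocumented` or `unjustified` maps directly to a noncompliance flag in the verification steps; `stale` entries indicate the system of record itself needs reconciliation.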
Respondent attests they cannot confirm portal Apex methods are free of parameter-based record access. This is the canonical IDOR (insecure direct object reference) vector for portals.
If portal-exposed Apex trusts user-controlled parameters to determine record access, external users can manipulate inputs to retrieve or modify unauthorized records. This may enable record enumeration, data exfiltration, or data corruption. IDOR vulnerabilities represent a critical authorization boundary failure.
- Enforce `with sharing` on portal-facing classes by default.
- Scope queries using the running user's context where possible.
- If record IDs are accepted as parameters, verify access using sharing or `UserRecordAccess` before returning or modifying data.
- Remove user-controlled query structure and allowlist permissible filter inputs.
- Enforce CRUD and FLS on all returned or modified records.
- Identify all Apex classes exposing `@AuraEnabled`, `@InvocableMethod`, or `@RestResource` methods accessible to portal users.
- Review method parameters of type `Id`, `String`, collections, or maps that could influence record access.
- Verify that:
  - Classes run `with sharing` or implement equivalent authorization checks.
  - Record access is validated before query or DML.
  - CRUD and FLS are enforced.
  - Dynamic SOQL does not incorporate unsanitized user input.
- Attempt to access unauthorized records by manipulating record IDs or filter inputs from a portal user session.
- Flag any method that relies solely on user-supplied parameters to control record access as noncompliant.
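As a first pass over a large codebase, the identification steps above can be partially automated with a rough static scan. This sketch runs simple regexes over Apex source strings; it only surfaces candidates for the manual review described above and will produce both false positives and false negatives (the class bodies in the usage note are illustrative):

```python
import re

# Rough heuristics: exposure annotations, sharing declarations, and
# record-identifying parameter types in Apex source.
EXPOSED = re.compile(r"@(AuraEnabled|InvocableMethod|RestResource)", re.IGNORECASE)
WITH_SHARING = re.compile(r"\bwith\s+sharing\b", re.IGNORECASE)
ID_PARAM = re.compile(r"\(\s*[^)]*\b(Id|String|List<Id>|Set<Id>|Map<)", re.IGNORECASE)

def triage_apex_class(source):
    """Return a list of findings for one Apex class body (string)."""
    findings = []
    if not EXPOSED.search(source):
        return findings  # not portal-exposed, out of scope
    if not WITH_SHARING.search(source):
        findings.append("exposed class does not declare 'with sharing'")
    if ID_PARAM.search(source):
        findings.append("exposed method accepts Id/String parameters; "
                        "verify record access is checked before query/DML")
    return findings
```

A class like `public class PortalCtrl { @AuraEnabled public static Account get(Id accId) {...} }` would yield two findings, while a `with sharing` class whose methods take no parameters yields none.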
Respondent attests guest users are NOT properly restricted to login/signup-only access. Guest user breaches are the most public class of Salesforce data leaks.
Guest users represent the highest-risk trust boundary in Salesforce portals—they are unauthenticated, have zero accountability, generate minimal audit trail, and operate with potential adversarial intent. When guest users are granted object permissions or can invoke custom Apex methods, attackers can systematically enumerate organizational data without even creating an account. Historical Salesforce security updates have repeatedly addressed guest user permission defaults because vendors consistently misconfigure this boundary. A single guest-accessible method that queries user records, cases, accounts, or custom objects creates a public API for data exfiltration accessible to anyone on the internet. This constitutes a Critical boundary violation: unauthenticated attackers access organizational data with no authentication required.
- Remove all object-level permissions from guest user profiles except those explicitly required for authentication flows.
- Audit and remove guest user access to any custom Apex methods that query or return organizational data.
- For public data requirements (knowledge articles, case submission), implement a service layer pattern:

  ```apex
  @AuraEnabled
  public static List<Knowledge__kav> getPublicArticles() {
      if (UserInfo.getUserType() == 'Guest') {
          // Allowlist-based, no parameters accepted
          return [
              SELECT Id, Title, Summary
              FROM Knowledge__kav
              WHERE PublicationStatus = 'Online'
                AND IsVisibleInPkb = true
              LIMIT 10
          ];
      }
      throw new AuraHandledException('Access denied');
  }
  ```

- Implement network-level rate limiting and CAPTCHA for guest-accessible endpoints.
- Review Salesforce security updates and apply guest user permission restrictions from recent releases.
- Identify all guest user profiles used by customer portal sites (typically named "Site Guest User" or similar).
- Review object-level permissions for guest user profiles and verify that all business-related standard and custom objects have Read, Create, Edit, Delete permissions set to disabled.
- Enumerate all custom Apex classes containing `@AuraEnabled` methods and verify that none are accessible to guest users (either by checking profile permissions or testing invocation from guest context).
- For any guest-accessible functionality beyond authentication flows, verify implementation of service layer architecture with explicit access controls.
- Test by accessing the portal without authentication and attempting to invoke Apex methods or query objects via built-in Lightning controllers.
- Flag any guest user object permissions or method access as noncompliant.
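The object-permission review above can be expressed as a small audit helper, assuming the guest profile's permissions have been exported into a mapping. The allowlisted object name is a placeholder for whatever your authentication flows genuinely require:

```python
# Objects a guest profile may legitimately touch for authentication flows
# are organization-specific; this allowlist entry is a placeholder.
AUTH_FLOW_ALLOWLIST = {"SelfRegistration__c"}

def audit_guest_object_perms(object_perms, allowlist=AUTH_FLOW_ALLOWLIST):
    """Flag guest-profile object permissions outside the allowlist.

    object_perms: dict of object API name -> dict of CRUD flags,
        e.g. {"Account": {"read": True, "create": False, ...}}.
    Returns the set of non-allowlisted objects with any CRUD access.
    """
    return {
        obj for obj, perms in object_perms.items()
        if obj not in allowlist and any(perms.values())
    }
```

Any object this returns corresponds to a noncompliant guest permission under the verification steps above.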
Respondent attests portal-exposed Flows accept user-supplied input variables that determine which records are accessed. This is an IDOR vulnerability — Autolaunched Flows run in system context by default.
Flows accepting user-controlled input variables for record access create IDOR vulnerabilities allowing external users to access any record in the org. Because Autolaunched Flows run in system context without sharing by default, a single flow accepting a record ID input parameter bypasses all permissions and sharing rules. This constitutes a Critical boundary violation: unauthorized users access data they should never see, with no compensating controls required to fail.
- Refactor flows to eliminate input variables controlling record access.
- Derive accessible records from authenticated user context (e.g., `$User.Id`, `$User.ContactId`, `$User.AccountId`).
- Configure flows to run in user context ("With Sharing") where available.
- Using the inventory from SBS-CPORTAL-003, identify all portal-exposed Autolaunched Flows.
- For each flow, examine input variables for types that could contain record IDs (Text, Record, Text Collection).
- Review flow logic to determine if input variables influence Get Records, Update Records, or Delete Records elements.
- Flag any flow accepting user-supplied input variables that control record access as noncompliant.
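The flow review above reduces to a simple rule: an input variable of an ID-capable type that feeds a record element is a finding. A sketch, assuming flow metadata has been summarized into the dictionary shape shown (the `feeds` field is an assumption derived from reviewing the flow logic, not an actual Flow metadata field):

```python
# Variable types that can carry record IDs (per the review criteria above).
ID_CAPABLE_TYPES = {"Text", "Record", "Text Collection"}
RECORD_ELEMENTS = {"Get Records", "Update Records", "Delete Records"}

def flag_idor_flows(flows):
    """Return names of flows whose user-supplied inputs control record access.

    flows: list of dicts like
        {"name": ..., "inputs": [{"type": ..., "feeds": ["Get Records", ...]}]}
    where "feeds" lists the flow elements each input variable flows into.
    """
    flagged = []
    for flow in flows:
        for var in flow.get("inputs", []):
            if var["type"] in ID_CAPABLE_TYPES and RECORD_ELEMENTS & set(var.get("feeds", [])):
                flagged.append(flow["name"])
                break  # one risky input is enough to flag the flow
    return flagged
```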
Respondent attests they do NOT have documented justification for super-admin-equivalent users.
Without documented justification for Super Admin–equivalent users, organizations lose visibility into who possesses unrestricted access to the entire Salesforce environment. These users can read and modify all data, manage user accounts, and alter the security posture of the org without oversight. Undocumented Super Admin access prevents security teams from assessing breach impact, investigating administrative actions, or maintaining accountability for the most sensitive operations. The inability to identify and justify these users also prevents effective access reviews and creates persistent exposure from forgotten or orphaned administrative accounts.
- Remove one or more of the Super Admin–equivalent permissions from any user who does not have a documented business or technical justification.
- For users who legitimately require this level of access, add or update rationale within the system of record.
- Reassess user access to ensure alignment with least privilege, reducing broad permissions where narrower privileges are sufficient.
- Enumerate all users who simultaneously possess the following permissions through any profile, permission set, or permission set group:
  - `View All Data`
  - `Modify All Data`
  - `Manage Users`
- Compile a list of all users meeting the criteria for Super Admin–equivalent access.
- Compare the list against the organization’s system of record.
- Verify that each Super Admin–equivalent user has corresponding documentation that includes:
- A clear business or technical justification for requiring this level of access, and
- Any relevant exception or approval records.
- Flag as noncompliant any users with Super Admin–equivalent access lacking documentation or justification.
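The enumeration and comparison steps above can be sketched as a set check, assuming effective permissions have already been aggregated per user across profiles, permission sets, and groups (the data shapes are illustrative):

```python
SUPER_ADMIN_PERMS = {"View All Data", "Modify All Data", "Manage Users"}

def find_super_admin_equivalents(user_perms, system_of_record):
    """Identify users holding all three Super Admin-equivalent permissions,
    and which of them lack documented justification.

    user_perms: dict of username -> set of effective permissions.
    system_of_record: dict of username -> justification string.
    Returns (super_admins, undocumented) sets.
    """
    super_admins = {
        user for user, perms in user_perms.items()
        if SUPER_ADMIN_PERMS <= perms  # holds all three simultaneously
    }
    undocumented = {
        user for user in super_admins
        if not system_of_record.get(user)
    }
    return super_admins, undocumented
```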
Respondent attests no inventory of portal-exposed Apex classes and Flows exists. External attack surface cannot be assessed; security testing has no authoritative scope.
Without a complete inventory of portal-exposed components, organizations cannot assess their external attack surface or enforce security reviews for externally accessible code. Security teams lose visibility into which business logic external users can invoke, preventing effective security testing, incident response, and access governance. This impairs the ability to detect unauthorized exposure of sensitive functionality or identify components requiring security hardening.
- Enumerate all Apex classes containing `@AuraEnabled` methods.
- Enumerate all Autolaunched Flows embedded in Experience Cloud sites.
- For each component, document which portal user profiles and permission sets have access.
- Store the inventory in the designated system of record.
- Establish a process to update the inventory when new components are exposed to portals.
- Request the organization's inventory of portal-exposed Apex classes and Flows from the designated system of record.
- Query all Apex classes with `@AuraEnabled` methods accessible to portal user profiles.
- Query all Autolaunched Flows invoked from Experience Cloud pages or components.
- Verify each component appears in the inventory with documentation of which portal profiles can access it.
- Flag any portal-exposed component missing from the inventory as noncompliant.
Respondent attests they retain less than 30 days of `ApiTotalUsage` event logs. Without sufficient retention, anomalous API behavior is invisible after the fact.
Without retained API Total Usage logs, organizations lose visibility into REST, SOAP, and Bulk API activity—including user identity, connected app, client IP, resource accessed, and status codes. This materially degrades the ability to detect anomalous API behavior, investigate security incidents, attribute unauthorized access, and determine the scope of potential breaches. The absence of this visibility creates a significant gap in incident detection and response capabilities.
- If the organization has only 1-day ApiTotalUsage EventLogFile availability in Salesforce, implement an automated daily export that downloads newly available ApiTotalUsage log files and stores them externally for at least 30 days.
- If the organization uses Salesforce-native retention, ensure the configured retention period for Event Log Files is not less than 30 days.
- Restrict access to the retained logs (Salesforce-native or external) to authorized personnel and designated service identities.
- Determine whether the organization relies on Salesforce-native retention (Event Monitoring, whether the standalone add-on or via Salesforce Shield) or an external log store as the system of record for ApiTotalUsage EventLogFile data.
- If the organization relies on Salesforce-native retention, verify that EventLogFile data is retained for at least 30 days (for example, confirm the org is entitled to and configured for Event Log File retention that is at least 30 days and can retrieve ApiTotalUsage EventLogFile data within the preceding 30-day window).
- If the organization relies on an external log store (including all orgs with only 1-day ApiTotalUsage availability in Salesforce):
- Verify an automated process exists that retrieves EventLogFile entries where EventType='ApiTotalUsage' and downloads the associated log files at least once every 24 hours.
- Inspect job schedules/run history and confirm successful executions covering at least the last 30 days (no missed days).
- From the external log store, retrieve ApiTotalUsage logs for (a) the oldest day in the preceding 30-day window and (b) the most recent day, and confirm both are accessible and attributable to the organization.
- Verify access to the external log store is restricted to authorized roles and service identities responsible for monitoring and investigations.
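The 30-day coverage check above can be sketched as a date-set comparison, assuming the retained store exposes one daily ApiTotalUsage file per date:

```python
from datetime import date, timedelta

def retention_gaps(available_log_dates, as_of, window_days=30):
    """Return the dates in the preceding window with no ApiTotalUsage log.

    available_log_dates: iterable of datetime.date for which a daily log
        file exists in the retained store.
    as_of: datetime.date the check runs on.
    """
    have = set(available_log_dates)
    # EventLogFile data is produced for completed days, so the window
    # covers yesterday back through window_days ago.
    expected = {as_of - timedelta(days=n) for n in range(1, window_days + 1)}
    return sorted(expected - have)
```

An empty result means the "no missed days" condition holds; any returned dates are the gaps to investigate in the export job's run history.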
Respondent attests no continuous suspicious-login monitoring exists. Compromised credentials provide an undetected foothold; attacker dwell time grows until the breach is discovered another way.
Failure to continuously monitor and rapidly respond to suspicious login activity creates an immediate and severe risk of a full system compromise. This control's absence exposes the organization to three critical, cascading risks:
- The primary risk is that a successful suspicious login is the initial foothold for a sophisticated attack. Without automated, anomaly-based alerting, the attacker's presence goes unnoticed, leading to a prolonged "dwell time" (the period between compromise and detection). A longer dwell time allows the attacker to extensively reconnoiter the environment, escalate privileges, and establish persistence. The longer the intruders have to prepare, the harder it is to evict them from the Org.
- Compromised credentials grant attackers direct access to sensitive customer data, intellectual property, and internal records. The lack of timely detection enables the attacker to perform mass data exfiltration via API, reporting, or export functionality before the account can be disabled, leading to financial, regulatory, and reputational damage.
- An attacker with elevated access can modify or delete critical configuration settings, business-critical data, or code, leading to system downtime, operational failures, and a loss of data integrity.
- Deploy a dedicated Security Information and Event Management (SIEM) system or a specialized login anomaly detection platform. If a platform is already in place, ensure all Salesforce authentication and event logs, particularly LoginHistory and the relevant Event Monitoring logs, are being successfully exported, ingested, and correctly parsed.
- Configure the monitoring solution to explicitly analyze login events from all sources, including:
- Human Users: Both Single Sign-On (SSO) and direct Salesforce credentials.
- Non-Human/Integration Accounts: Any connected systems, APIs, or automated processes.
- Periodically review and test the effectiveness of the deployed detection rules, minimizing false positives and updating baselines as organizational travel patterns or integration accounts change.
- Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated suspicious login alerts.
- Verify the organization utilizes a continuous analytics solution that looks for anomalies in Salesforce login data.
- Confirm that the analytics solution's monitoring scope includes all user logins and all non-human integration or application user logins, regardless of whether the authentication method is Single Sign-On (SSO) or direct Salesforce login.
- Confirm that there is a documented, mandatory procedure to periodically (e.g. quarterly) review, test, and update the login anomaly detection rules and baselines to ensure they remain effective and minimize false positives. The review should also include an analysis of suspicious login alerts investigated during the review period; a complete absence of investigations can mean the rules are not firing.
- Review the process and history of investigating and responding to suspicious login alerts.
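One of the simplest rules such a monitoring solution can implement is flagging logins from countries outside each user's historical baseline. A minimal sketch of that single rule (real platforms combine many more signals, such as device fingerprint, time of day, and impossible travel):

```python
def flag_novel_login_countries(baseline, logins):
    """Flag logins from countries outside each user's historical baseline.

    baseline: dict of username -> set of countries seen during the
        baselining period.
    logins: iterable of (username, country) tuples from recent LoginHistory.
    Returns a list of (username, country) anomalies; users with no
    baseline are always flagged, since they have no established pattern.
    """
    anomalies = []
    for user, country in logins:
        if country not in baseline.get(user, set()):
            anomalies.append((user, country))
    return anomalies
```

The rule applies identically to human users and integration accounts, matching the scope requirement above; integration accounts typically have very stable baselines, so any novel origin is high-signal.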
Respondent attests no continuous API anomaly monitoring exists. Post-authentication attacks via stolen tokens or compromised integrations operate without detection.
Failure to monitor API activity creates a critical blind spot for post-authentication attacks, leading to severe and cascading consequences:
- The primary risk is that a compromised application, integration account, or stolen session token can be used to perform mass, high-speed data exfiltration.
- Attackers can leverage API access to perform unauthorized and undetectable manipulation of the Salesforce Org.
- Deploy or build a dedicated analytics solution for granular API log analysis. Ensure all relevant Salesforce Event Monitoring logs for API activity are being successfully exported, ingested, and correctly parsed into the monitoring platform.
- Configure the analytics solution to first establish a baseline of normal API behavior for every user and integration account. Following the baseline period, configure and tune high-severity anomaly detection rules to detect deviations, specifically rules that:
- Alert on an anomalous spike in data retrieval (read/query) volume.
- Monitor for CRUD (Create, Read, Update, Delete) operations on objects or security settings outside of an entity's established historical interaction baseline.
- Flag a change from read-heavy activity to an unusual volume of write or delete operations.
- Alert on API calls originating from a suspicious or atypical geolocation/IP address for the accessing entity.
- Define clear, high-priority, and time-bound Standard Operating Procedures (SOPs) to investigate and respond to all generated API anomaly alerts. The procedure must include immediate steps to lock or deactivate the compromised user or application account and revoke stolen access tokens to prevent further data loss or system manipulation.
- Establish a mandatory, recurring (e.g. quarterly) review process to:
- Periodically test the effectiveness of the deployed detection rules using simulated incidents (e.g. table-top exercises).
- Review and update the established baselines as new integrations are deployed or organizational usage patterns change.
- Verify the organization utilizes a continuous analytics solution (e.g. SIEM, log aggregation platform) that is verifiably integrated with and ingesting all required API activity data, specifically the detailed Event Monitoring logs from Salesforce.
- Confirm that the monitoring scope includes all API access, including user-initiated, non-human/integration accounts, and all API methods (e.g. REST, SOAP, Bulk). Verify the logs ingested are granular enough to track specific API calls (CRUD operations) against specific objects.
- Request evidence (e.g. SIEM rule configurations, simulated alert outputs) to verify that the following specific, high-severity anomaly detection rules are active and tuned:
- Mass Data Exfiltration: Alerts on an anomalous spike in data retrieval volume for a specific user or application, especially for sensitive objects.
- Unauthorized Object Access: Alerts on a user or integration account attempting CRUD operations on objects they have no historical precedent for interacting with (e.g. Sales user accessing Security Settings).
- Suspicious Action Shift: Alerts on a shift from a baseline of read-only operations to a high volume of write or delete operations by a user or application.
- Suspicious Network Origins: Alerts on API calls originating from a suspicious or high-risk geolocation/IP address that is unusual for the accessing entity.
- Process Review - Triage & Response: Examine the documented procedures for triage, investigation, and response to API anomaly alerts. Review a sample of anomaly findings and the actions taken on them (e.g., over the past six months) to confirm that alerts are being acted upon within the defined response parameters.
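The mass-data-exfiltration rule above amounts to comparing today's read volume against a per-entity baseline. A minimal sketch with illustrative thresholds; production rules would use more robust statistics than a simple mean:

```python
def detect_read_spike(history, today_count, multiplier=3.0, min_floor=1000):
    """Flag an anomalous spike in daily read/query volume for one entity.

    history: list of daily read counts from the baseline period.
    today_count: today's read count for the same user or integration.
    multiplier / min_floor: illustrative tuning knobs; a spike must exceed
        both multiplier x the baseline mean and an absolute floor, so that
        noisy low-volume accounts are not constantly flagged.
    """
    if not history:
        return today_count >= min_floor  # no baseline: flag any real volume
    mean = sum(history) / len(history)
    return today_count > max(mean * multiplier, min_floor)
```

The same shape generalizes to the other rules listed above by swapping the metric: write/delete counts for the action-shift rule, or per-object call counts for the unauthorized-object rule.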
Respondent attests at least one profile has an unrestricted login IP range (e.g., `0.0.0.0/0`) that defeats the purpose of IP allow-listing.
Overly broad login IP ranges effectively disable network-based access controls, allowing authentication from any location on the internet. However, exploitation requires credentials to be compromised first—this control provides defense-in-depth rather than establishing a primary security boundary. When authentication controls (SBS-AUTH-001) are enforced, IP restrictions serve as an additional layer that limits the blast radius of credential compromise.
- Remove any profile login IP ranges that effectively grant unrestricted global access.
- Replace them with IP ranges that correspond to approved corporate networks, office locations, VPN ingress points, or other authorized infrastructure.
- Validate that updated network restrictions do not block legitimate access paths and that users can authenticate through sanctioned networks.
- Establish an internal governance process to review and approve all future additions of profile login IP ranges.
- Retrieve all profile login IP ranges via Setup → Profiles → Login IP Ranges or by querying the Profile metadata (`loginIpRanges` field) using the Metadata API.
- For each profile, enumerate all configured login IP ranges.
- Identify any ranges that:
  - Cover the entire IPv4 space, or
  - Represent effectively unrestricted access (e.g., `0.0.0.0`–`255.255.255.255`, `1.1.1.1`–`255.255.255.255`, or similar patterns).
- Confirm that all IP ranges align with organizational security policy and defined network boundaries.
- Flag any profile with an impermissible or overly broad range.
- Download API Total Usage logs (the `ApiTotalUsage` EventLogFile, available in the free tier of Event Monitoring) to validate that IP restrictions are effective:
  - Extract unique `CLIENT_IP` values from recent API activity.
  - Compare against documented, approved organizational network ranges.
  - Identify any new or unexpected IP addresses making API calls.
  - Cross-reference unusual IPs with profile assignments to identify potential policy gaps.
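Both the range check and the `CLIENT_IP` comparison above can be sketched with the standard `ipaddress` module. The 50% coverage threshold for "effectively unrestricted" is an illustrative assumption, not part of the benchmark:

```python
import ipaddress

def is_effectively_unrestricted(start, end, fraction=0.5):
    """True if a profile login IP range covers a huge share of IPv4 space.

    start/end: dotted-quad strings as configured on the profile.
    fraction: illustrative threshold for "effectively unrestricted".
    """
    lo = int(ipaddress.IPv4Address(start))
    hi = int(ipaddress.IPv4Address(end))
    return (hi - lo + 1) >= fraction * 2**32

def unexpected_client_ips(client_ips, approved_cidrs):
    """Return CLIENT_IP values that fall outside approved networks."""
    nets = [ipaddress.ip_network(c) for c in approved_cidrs]
    return sorted(
        ip for ip in set(client_ips)
        if not any(ipaddress.ip_address(ip) in net for net in nets)
    )
```

For example, `1.1.1.1`–`255.255.255.255` covers ~99% of the address space and is flagged, while a single `/24` office range is not.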
Respondent attests they do NOT maintain an inventory of Long Text Area fields containing regulated data. Without it, controls cannot be applied to the right fields.
Without a current inventory of fields containing regulated data, organizations cannot systematically apply appropriate protection, retention, or access controls to sensitive data locations—and may be unable to fulfill privacy obligations such as GDPR's Right to Erasure or CCPA deletion requests that require knowing all locations where personal data is stored. During audits or breach investigations, the absence of a maintained inventory delays response times and may result in incomplete remediation or missed data locations.
- Generate an inventory using scan results, administrative review, and metadata analysis.
- Document all LTA fields containing regulated data and classify the associated data types.
- Establish a recurring review cycle to update the inventory.
- Integrate the inventory into governance functions such as retention, DLP, access reviews, and breach response planning.
- Obtain the organization's documented inventory of Long Text Area fields containing regulated data.
- Compare the inventory against Salesforce metadata to confirm all relevant fields are included.
- Review scan results or administrative evidence demonstrating how fields were identified.
- Verify that the inventory includes object name, field API name, data classification, and last review date.
- Determine whether the inventory is maintained and current; missing, outdated, or incomplete inventories indicate noncompliance.
Respondent attests no recurring review of active Public Content links exists. Legacy and accidentally-shared links accumulate as undetected, persistent exposure.
Without periodic review, Public Content links accumulate over time—including legacy links created before security policies were established, links that have outlived their business purpose, and links created through accidental or unauthorized sharing. These forgotten links represent persistent exposure that may go undetected indefinitely. While this control does not prevent initial link creation issues, it provides a governance mechanism to identify and remediate accumulated risk, supporting defense-in-depth and reducing the organization's overall exposure footprint.
- Establish a documented process for periodic review of all `ContentDistribution` records.
- Define a review cadence appropriate to organizational risk tolerance (quarterly recommended as a baseline).
- Create a scanning mechanism (script, report, or tool) to enumerate all active Public Content links.
- Define review criteria to identify links requiring remediation: missing expiry dates, missing passwords on sensitive content, links older than a defined threshold, or links to content no longer requiring external sharing.
- Assign ownership for the review process and remediation actions.
- Maintain records of each review cycle for audit purposes.
- Verify the organization has a documented process for periodic Public Content link review.
- Confirm the review cadence is defined (e.g., quarterly, monthly) and appropriate for the organization's risk profile.
- Obtain evidence of recent review execution (e.g., scan results, remediation records, review meeting notes).
- Verify that reviews include all active `ContentDistribution` records.
- Confirm that identified issues are tracked through remediation or deletion.
- Flag organizations without a documented review process or evidence of recent execution.
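The review criteria above can be sketched as a filter over exported link records. The field names below are illustrative, not the actual `ContentDistribution` API field names:

```python
from datetime import datetime, timedelta

def review_public_links(links, now, max_age_days=180):
    """Apply the review criteria above to exported public-link records.

    links: list of dicts with keys "name", "created" (datetime),
        "expiry" (datetime or None), "password_protected" (bool),
        "sensitive" (bool). Field names are illustrative placeholders.
    max_age_days: illustrative age threshold for stale links.
    Returns dict of link name -> list of issues needing remediation.
    """
    issues = {}
    for link in links:
        problems = []
        if link["expiry"] is None:
            problems.append("missing expiry date")
        if link["sensitive"] and not link["password_protected"]:
            problems.append("sensitive content without password")
        if now - link["created"] > timedelta(days=max_age_days):
            problems.append("older than review threshold")
        if problems:
            issues[link["name"]] = problems
    return issues
```

The output doubles as the audit record for the review cycle: each entry is a link requiring remediation, and an empty dict documents a clean pass.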
Respondent attests they do NOT maintain an inventory + justification list for Named Credentials. Forgotten Named Credentials with valid stored secrets are a hidden third-party access surface.
Without a documented inventory and justification for Named Credentials, undocumented or unjustified configurations may expose the organization to data leakage, unauthorized integrations, or reliance on insecure or untrusted endpoints. However, this control provides governance documentation rather than detection or prevention capability: it supports audit readiness and informed decision-making about authenticated external connections, but other controls are required to detect or prevent actual misuse of approved credentials.
- Add any undocumented Named Credentials to the system of record.
- Document a valid business justification for each Named Credential.
- Remove, disable, or reconfigure any Named Credentials that cannot be justified or that reference untrusted endpoints.
- Establish a recurring reconciliation process to ensure Named Credentials remain fully inventoried and justified.
- Enumerate all Named Credentials using Salesforce Setup, Metadata API, Tooling API, or Connect REST API.
- Retrieve the organization’s system of record for approved external endpoints and integration credentials.
- Compare the Salesforce list to the system of record to confirm all Named Credentials are documented.
- Verify that each documented Named Credential includes:
- The external endpoint URL
- The authentication type (named principal or per-user)
- The business justification for the integration
- Flag any Named Credentials missing from the inventory or lacking justification as noncompliant.
Your roadmap is clear. Implementation is where most security efforts stall.
Our cofounders led product and engineering teams at GrubHub and Rocket, led all of product and engineering at National Debt Relief, and have built and audited Salesforce environments at every scale, from Series A startups to Fortune 500 enterprises.
- Walk through your top remediation priorities together.
- Get a fixed-scope plan to close the highest-impact gaps first.
- Optional: bring us in to ship the fixes and keep them from regressing.
No obligation. We'll review your top three findings with you and tell you whether HelloMavens is the right fit.
Appendix
Methodology, disclaimer, and glossary
Each control was evaluated against the Security Benchmark for Salesforce vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Controls scored as pass / fail / inconclusive / not applicable. Category scores are weighted by risk tier (Critical 5, High 3, Moderate 2). Overall score is a weighted average across categories proportional to each category's share of Critical-tier and High-tier controls. Inconclusive and not-applicable controls are excluded from denominators. If any Critical-tier control failed, the overall grade is capped at C regardless of other scores.
Engine version 0.0.0-alpha.10. SBS vd4304e18e6f3747b04b1a7097b3d03a6036e5a3f. Disclaimer version 2026-05-02-placeholder-1.
The HelloMavens Salesforce Security Audit produces a directional security assessment based on questionnaire responses you provide.
This report is not a substitute for a formal security audit, penetration test, or compliance certification.
HelloMavens LLC makes no warranty, express or implied, regarding the completeness, accuracy, or fitness for any particular purpose of this report.
You confirm that you are authorized to submit this information about your organization.
HelloMavens LLC will process your responses solely to generate this report and will not retain raw scan data after report generation. Aggregate, anonymized scoring data may be retained for benchmarking.
Any remediation actions you take based on this report are at your own risk and discretion.
- SBS
- Security Benchmark for Salesforce — an open standard of audit-ready controls maintained at github.com/Salesforce-Security-Benchmark.
- OWASP Top 10
- A standard awareness document for developers and web application security maintained by the Open Web Application Security Project. The 2021 edition is referenced throughout this report.
- HIPAA Security Rule
- U.S. federal regulation governing security standards for protected health information.
- SOC 2
- Service Organization Control 2 — an audit framework focused on the AICPA Trust Services Criteria (security, availability, processing integrity, confidentiality, privacy).
- Inconclusive
- A control that could not be evaluated from the evidence provided (e.g. you answered "I don't know"). Inconclusive controls are excluded from scoring denominators per spec §8.
- Critical-fail cap
- If any Critical-tier control fails, the overall risk grade is capped at C regardless of other scores. The cap keeps catastrophic risk areas visible in the headline grade.
- Consultant scan
- An evidence-based audit run by a security consultant against your Salesforce org via the upcoming sbs-scan CLI. Resolves inconclusive controls with high-confidence findings.
What these passing controls protect against, in plain language.
Access controls
SBS-ACS-006 — Documented Justification for `Use Any API Client` Permission
This permission, which bypasses API Access Control entirely, is granted only to justified users, so external apps can't get data access without your vetting and allowlisting first.
SBS-ACS-001 — Enforce a Documented Permission Set Model
Your permission set model is documented and enforced, so privilege sprawl can't accumulate quietly — every authorization construct conforms to the standard, and access reviews actually have something to compare against.
SBS-ACS-002 — Documented Justification for All `API-Enabled` Authorizations
Your API-enabled authorizations are tracked with documented justification, so you can see who has programmatic data access at a glance and prove every permission is intentional during audit reviews.
SBS-ACS-005 — Only Use Custom Profiles for Active Users
Regular users sit on custom profiles, not Salesforce-managed standard ones, so platform updates can't enable new permissions without your approval and least privilege is actually enforceable.
SBS-ACS-007 — Maintain Inventory of Non-Human Identities
You have a current inventory of non-human identities, so automated access can be reviewed, orphaned credentials can be retired, and security incidents involving integrations are actually investigable.
SBS-ACS-008 — Restrict Broad Privileges for Non-Human Identities
Non-human identities aren't carrying broad privileges by default, so a compromised API key or OAuth token can't bypass sharing rules or trigger administrative operations org-wide.
SBS-ACS-011 — Enforce Governance of Access and Authorization Changes
Access changes go through governed approval and audit, so excessive permissions can't slip in unnoticed and incident investigations have a paper trail to follow.
SBS-ACS-009 — Implement Compensating Controls for Privileged Non-Human Identities
Privileged non-human identities have compensating controls layered on top of credentials, so secret leakage isn't a single point of failure for your most sensitive integrations.
SBS-ACS-010 — Enforce Periodic Access Review and Recertification
Access is reviewed and recertified on a cadence, so privilege creep can't quietly accumulate from old roles, temporary projects, and forgotten grants — least privilege stays current over time.
SBS-ACS-012 — Classify Users for Login Hours Restrictions
Privileged accounts are classified for login-hours restrictions, adding a defense-in-depth layer — so even a compromised credential can't be exploited freely during off-hours when detection is thin.
Authentication
SBS-AUTH-001 — Enable Organization-Wide SSO Enforcement Setting
SSO enforcement is on org-wide, so users can't quietly authenticate around centralized identity management — credential-based attacks lose their direct path into your org.
SBS-AUTH-004 — Enforce Strong Multi-Factor Authentication for External Users with Substantial Access to Sensitive Data
MFA is enforced for external users with substantial data access, so a stolen password alone can't sign someone in — phishing and credential-stuffing lose their main path to your sensitive data.
SBS-AUTH-002 — Govern and Document All Users Permitted to Bypass Single Sign-On
SSO bypass exceptions are documented and approved, so off-SSO accounts can't proliferate unnoticed and audit reviews have clean visibility into every authentication exception.
Code security
SBS-CODE-004 — Prevent Sensitive Data in Application Logs
Sensitive data is kept out of application logs, so a low-privilege account with log access can't extract credentials or PII without tripping the access controls on the source objects.
Data protection
SBS-DATA-001 — Implement Mechanisms to Detect Regulated Data in Long Text Area Fields
You can detect regulated data in long-text fields, so PII can't accumulate in unknown locations — GDPR Right-to-Erasure and CCPA deletion requests are actually executable when they arrive.
SBS-DATA-003 — Maintain Tested Backup and Recovery for Salesforce Data and Metadata
Backups are tested, not just configured, so accidental deletion, malicious destruction, or configuration corruption can be recovered from with confidence rather than hope.
SBS-DATA-004 — Require Field History Tracking for Sensitive Fields
Field history tracking is on for sensitive fields, so unauthorized or accidental changes are detectable, investigable, and accountable rather than silent.
Deployments
SBS-DEP-005 — Implement Secret Scanning for Salesforce Source Repositories
Secret scanning runs on your repos, so hardcoded credentials get caught before they're committed — and the supply-chain path through repository access stays closed to outsiders.
SBS-DEP-001 — Require a Designated Deployment Identity for Metadata Changes
Metadata changes go through a designated deployment identity, so production changes are attributable and unauthorized direct edits stand out instead of blending into routine admin activity.
SBS-DEP-002 — Establish and Maintain a List of High-Risk Metadata Types Prohibited from Direct Production Editing
You've defined which metadata types are high-risk, so deployment governance has clear boundaries — Apex, auth settings, and outbound connectivity can't be edited directly in production.
SBS-DEP-003 — Monitor and Alert on Unauthorized Modifications to High-Risk Metadata
Unauthorized changes to high-risk metadata trigger alerts, so malicious or accidental drift gets surfaced quickly instead of persisting for weeks before being noticed.
SBS-DEP-004 — Establish Source-Driven Development Process
Production configuration ties back to a source-controlled approval trail, so 'what changed, when, by whom' is answerable — incident investigation has a real starting point instead of guesswork.
SBS-DEP-006 — Configure Salesforce CLI Connected App with Token Expiration Policies
Your CLI Connected App has token expiration policies, so a stolen laptop's token files don't grant indefinite access to production — the credential exposure window is bounded.
Foundations
SBS-FDNS-001 — Centralized Security System of Record
Your Salesforce security configurations, exceptions, justifications, and SBS-required inventories live in a centralized system of record, so governance survives staff turnover and audit conclusions can actually be reconstructed from durable artifacts.
File and content sharing
SBS-FILE-002 — Require Passwords on Public Content Links for Sensitive Content
Sensitive Public Content links are password-protected, so an intercepted or accidentally-shared URL alone isn't enough to view the content — there's an authentication layer between link possession and data exposure.
SBS-FILE-001 — Require Expiry Dates on Public Content Links
Public Content links carry expiry dates appropriate to the sensitivity of what they share, so a leaked or forgotten link doesn't extend exposure indefinitely — time-bounded access is governance you can actually enforce.
Integrations
SBS-INT-001 — Enforce Governance of Browser Extensions Accessing Salesforce
Browser extensions accessing Salesforce go through governance, so cloned or malicious extensions can't quietly harvest session tokens or exfiltrate data from authenticated users.
SBS-INT-002 — Inventory and Justification of Remote Site Settings
Your Remote Site Settings inventory is documented with justification, so unvetted endpoints can't be authorized for Apex callouts and you know exactly which external services your code talks to.
Monitoring and detection
SBS-MON-001 — Enable Event Monitoring Log Storage
Event Monitoring log storage is enabled for the event types your security policy requires, so investigations have the telemetry to reconstruct what happened — file downloads, API calls, report exports, all retained instead of silently dropped.
SBS-MON-002 — Retaining Event Logs
Event logs persist for your required retention period, exported when Salesforce's native retention falls short and protected from administrative deletion, so a slow-burn incident can still be reconstructed weeks or months after the fact.
Connected apps & OAuth
SBS-OAUTH-001 — Require Formal Installation of Connected Apps
Connected Apps go through formal installation, so security configuration — refresh token lifetimes, session policies, IP restrictions — is enforced by you, not inherited from the external developer.
SBS-OAUTH-002 — Require Profile or Permission Set Access Control for Connected Apps
Connected Apps require explicit profile or permission set access, so a Connected App can't quietly authenticate any user — least-privilege actually applies to OAuth sessions.
SBS-OAUTH-003 — Add Criticality Classification of OAuth-Enabled Connected Apps
OAuth-enabled Connected Apps are classified by criticality, so risk assessment is anchored — you know which integrations touch sensitive data and where to focus governance and monitoring.
SBS-OAUTH-004 — Due Diligence Documentation for High-Risk Connected App Vendors
High-risk Connected App vendors have documented due diligence on file, so onboarding decisions are informed and contractual obligations match the access being granted.
Security configuration
SBS-SECCONF-001 — Establish a Salesforce Health Check Baseline
You have a Health Check baseline, so configuration drift is actually detectable — you can tell when settings diverge from intentional decisions versus accumulated neglect.
SBS-SECCONF-002 — Review and Remediate Salesforce Health Check Deviations
Health Check deviations get reviewed and remediated, so configuration drift doesn't quietly weaken your security posture between audits — your baseline stays meaningful over time.
For these controls we don't have enough evidence to call a pass or a fail. Each one has a path to a definitive answer below.
SBS-CODE-001 — Mandatory Peer Review for Salesforce Code Changes
You answered "I don't know" to: "Does every Apex or Lightning code change get peer-reviewed and approved before it goes to production?". A consultant scan would resolve this with evidence-based scoring.
SBS-CODE-002 — Pre-Merge Static Code Analysis for Apex and LWC
You answered "I don't know" to: "Is there an automated security scanner (e.g., Salesforce Code Analyzer, PMD) that checks every code change BEFORE it gets merged?". A consultant scan would resolve this with evidence-based scoring.
SBS-CODE-003 — Implement Persistent Apex Application Logging
You answered "I don't know" to: "Do you have an Apex logging framework that writes events to a permanent place (not just the temporary "debug log")?". A consultant scan would resolve this with evidence-based scoring.
SBS-CPORTAL-005 — Conduct Penetration Testing for Portal Security
You answered "I don't know" to: "Have you had a security professional run penetration tests against your portal — both before launch and on a regular schedule since?". A consultant scan would resolve this with evidence-based scoring.
SBS-MON-005 — Monitor API Usage Against Limits
You answered "I don't know" to: "Do you continuously track how close you are to your daily Salesforce API limit, with proactive alerts at a defined utilization threshold (e.g., 80-90%) before you hit the cap?". A consultant scan would resolve this with evidence-based scoring.
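The utilization check behind SBS-MON-005 is straightforward to express in code. A minimal sketch: in a real monitor, the allowance and remaining-request figures would come from the Salesforce REST limits resource (the DailyApiRequests entry reports Max and Remaining); here they are plain inputs so the threshold logic stands on its own.

```python
# Sketch: proactive API-limit alerting per SBS-MON-005. In production,
# max_requests and remaining would be fetched from the Salesforce REST
# limits endpoint (DailyApiRequests: Max / Remaining); wiring omitted.

def api_utilization(max_requests, remaining):
    """Fraction of the daily API allowance already consumed."""
    used = max_requests - remaining
    return used / max_requests

def should_alert(max_requests, remaining, threshold=0.8):
    """True once utilization crosses the alert threshold
    (80% by default, per the 80-90% guidance in the control)."""
    return api_utilization(max_requests, remaining) >= threshold

print(should_alert(100_000, 15_000))  # 85% used -> True
```

Run on a schedule (say, every 15 minutes) and route a True result to your paging or chat channel, so the team hears about an approaching cap before integrations start failing.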