
Assess Apex Code Quality in Any Client Org

Run a PMD scan against inherited Apex code to surface security vulnerabilities, complexity problems, and best practice violations — with findings grouped by rule and prioritized by severity.

Written by Pablo Gonzalez

Code Quality is where you go when you need to do more than browse the org — when you need a structured, reproducible assessment of specific technical domains. The first assessment available is the Apex Quick Scan. More will follow.


Quick Scan

The Quick Scan runs PMD, the industry-standard static analysis tool for Salesforce Apex, against every Apex class in the connected org's index. The scan takes a few minutes and runs in the background — you can navigate away and come back.

What gets checked

The scan uses a curated PMD ruleset designed around one principle: actionable issues over style opinions. Noisy rules that generate false positives or that don't reflect real problems in Salesforce orgs (like requiring a logging level on every debug statement, or banning global modifiers needed for managed package development) are excluded. What remains is genuinely worth looking at:

  • Security — All PMD security rules, without exception. SOQL injection, unsafe DML operations, hardcoded credentials, and other patterns that create real vulnerabilities (see the sketch after this list).

  • Best Practices — Core rules covering Apex unit test quality, bulk-safe patterns, and trigger best practices. The rules excluded from this category are ones that are either too prescriptive for org code or don't apply outside managed package development.

  • Design — Complexity and structural issues: classes with too many responsibilities, methods that are too long, cyclomatic complexity that will make future changes risky.

If a violation appears in the results, it's there because it matters — not because PMD flagged every line that doesn't conform to a style guide.
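
To make the Security category concrete, here is a minimal sketch of the kind of pattern PMD's ApexSOQLInjection rule flags. The class and method names are hypothetical, but the shape is exactly what the rule looks for: user-controlled input concatenated into a dynamic SOQL string.

// Flagged by ApexSOQLInjection: user-controlled input is concatenated
// straight into a dynamic SOQL string.
public with sharing class AccountSearch {

    public static List<Account> searchUnsafe(String userInput) {
        String query = 'SELECT Id, Name FROM Account WHERE Name LIKE \'%'
            + userInput + '%\'';
        return Database.query(query); // injection point
    }

    // Clean alternative: a bind variable keeps the input out of the
    // query text, so there is nothing for PMD (or an attacker) to exploit.
    public static List<Account> searchSafe(String userInput) {
        String pattern = '%' + userInput + '%';
        return [SELECT Id, Name FROM Account WHERE Name LIKE :pattern];
    }
}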

See our ruleset:

<?xml version="1.0" encoding="UTF-8"?>
<ruleset xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         name="HappySoup Apex Ruleset"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0 http://pmd.sourceforge.net/ruleset_2_0_0.xsd">

    <description>
        Focused ruleset for Salesforce Apex code analysis.
        Prioritizes actionable issues over style opinions.
    </description>

    <!-- Best Practices: Core rules minus the noisy ones -->
    <rule ref="category/apex/bestpractices.xml">
        <exclude name="AvoidGlobalModifier"/> <!-- Needed for managed packages -->
        <exclude name="ApexAssertionsShouldIncludeMessage"/> <!-- Too strict for tests -->
        <exclude name="DebugsShouldUseLoggingLevel"/> <!-- Debug spam -->
        <exclude name="ApexUnitTestClassShouldHaveRunAs"/> <!-- Not always needed -->
    </rule>

    <!-- Design: Complexity and structure issues -->
    <rule ref="category/apex/design.xml">
        <exclude name="AvoidDeeplyNestedIfStmts"/> <!-- Covered by complexity -->
    </rule>

    <!-- Security: All security rules (critical!) -->
    <rule ref="category/apex/security.xml"/>

</ruleset>
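
For a feel of what the retained Best Practices rules report: AvoidLogicInTrigger, kept in the bestpractices category above, flags triggers that carry business logic in their body instead of delegating to a handler class. A minimal sketch with hypothetical names:

trigger OpportunityTrigger on Opportunity (before insert) {
    // Flagged by AvoidLogicInTrigger: business logic sits in the trigger body.
    for (Opportunity opp : Trigger.new) {
        if (opp.Amount != null && opp.Amount > 100000) {
            opp.StageName = 'Needs Review';
        }
    }

    // The shape PMD prefers: delegate to a handler class
    // (OpportunityTriggerHandler is a hypothetical name), so the logic is
    // unit-testable and ordering/recursion are controlled in one place.
    // OpportunityTriggerHandler.beforeInsert(Trigger.new);
}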

Reading the results

The results summary shows four numbers: files scanned, files with issues, total violations, and clean files. The ratio of files with issues to total files is the most useful headline: an org where 60% of Apex files have at least one violation looks very different from one where it's 10%, even if the total violation count is similar.

Violations are grouped by rule, not by file. This is intentional — it lets you see which problems are systemic (a rule that appears in 40 files is a pattern, not an outlier) versus isolated. Each rule card shows:

  • The rule name, linked to PMD's documentation for that rule

  • The category (Security, Best Practices, Design, Error Prone, Performance)

  • The priority: Critical, High, Medium, Low, or Info

  • How many files it appears in

Expand a rule to see exactly which files are affected, the line number of each violation, and PMD's description of what the violation is and why it matters.

Use the category sidebar to filter to a specific domain. Start with Security — any Critical or High finding there is worth raising with the client regardless of the broader code quality picture.

Scan history

Every scan is saved. You can run a new scan after a code deployment and compare the violation count to the previous result — a meaningful way to track whether code quality is improving or degrading over time.


What's coming

Code Quality is built to host multiple targeted assessments. The Quick Scan is the first. Planned additions include test coverage analysis, trigger framework audit, and async architecture review. If there's a specific assessment that would be useful for your work, let us know.


Practical scenarios

Scoping a code modernization engagement. Run the Quick Scan before committing to a timeline. The violation count by category tells you where the real complexity is — a high number of Design violations in a few key classes is a very different project than widespread Best Practices issues across hundreds of smaller files.

Identifying delivery risk before a sprint. A client wants to add new logic to a heavily used trigger. Run the scan to check whether that trigger is already flagged for complexity violations. High cyclomatic complexity in the file you're about to change is a signal to slow down and refactor first.
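
To make that signal concrete: PMD's CyclomaticComplexity rule counts decision points per method and reports a violation past its threshold (10 per method by default). This trimmed, hypothetical sketch shows the shape of code that accumulates them:

public with sharing class DiscountCalculator {
    // Every if / else if adds a decision point. A method that keeps
    // growing branches like this will cross the default threshold
    // of 10 and get flagged.
    public static Decimal discountFor(Opportunity opp) {
        Decimal discount = 0;
        if (opp.Amount == null) {
            return discount;
        } else if (opp.Amount > 500000) {
            discount = 0.20;
        } else if (opp.Amount > 250000) {
            discount = 0.15;
        } else if (opp.Amount > 100000) {
            discount = 0.10;
        } else if (opp.Amount > 50000) {
            discount = 0.05;
        }
        if (opp.Type == 'Renewal') {
            discount += 0.02;
        }
        return discount;
    }
}

Refactoring clusters like this into a data-driven lookup (a sorted list of thresholds, say) clears the flag and, more importantly, makes the next change to the file less risky.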

Security findings for a client presentation. Filter to the Security category. Any Critical or High PMD security finding is a concrete, documented vulnerability with a named rule and an affected file. That's something you can put directly into a client findings report.

Establishing a baseline before a handover. Before handing a project back to an internal team or a new SI, scan the org and share the results. It gives the incoming team a clear picture of what they're inheriting and sets documented expectations about what was addressed versus what was intentionally deferred.

Tracking improvement over time. Run a scan at the start of an engagement and again at the end. The delta in violation count is a concrete, defensible measure of the technical improvement your team delivered.
