May 7, 2026
SQL SIG Monthly Meeting
The kickoff meeting for the revived SQL SIG focused on the highest-voted topics from the group and reopened the forum for practical STAR-centered SQL discussion.
The meeting showed how SQL teams can govern the work around STAR, not just report from it. The strongest examples were API-driven system synchronization, a cleaner STAR-centered warehouse foundation, rule-based maintenance of STAR role tables, and SQL-controlled automation for logic that becomes difficult to express cleanly in Task Centre or BPA. Across all four topics, the common thread was SQL as the controlled operating layer where firm-specific rules, state, logging, and execution can live close to the data.
Monthly Meeting
1h 10m Duration
Multiple Firms Represented
Recording Available
What this meeting covered
API-based integrations: STAR-to-HubSpot synchronization and ADP-to-STAR worker-to-staff injection
The first topic introduced a SQL-first integration pattern: stored procedures hold the integration contract, temp tables define the requested shape, and a small .NET execution layer supplies the capabilities SQL Server does not natively expose cleanly. The result is a way for SQL teams to call APIs, handle paging, insert, update, delete, parse responses, render reports, process files, and use .NET libraries without turning every integration into a conventional application-development project.
What the demo established
This was the lead topic and the clearest technical headline of the meeting. Many accounting-firm integration workloads begin and end in SQL, but vendor APIs usually push the work into a separate application-development model: Visual Studio projects, compiled DLLs, external services, SQL CLR ceremony, language extensions, and code assets that live outside the database. The demo presented a different model: keep the integration logic visible and reusable from SQL, while letting .NET do the parts SQL is not designed to do.
HubSpot was used as the example because it is easy to test with a free API key, but the pattern was broader than HubSpot. Any vendor system with an API can be treated as an external source or target. The SQL developer defines the rows and columns needed, calls a stored procedure, and receives the result back into a table-shaped structure.
The basic pattern was: define the requested rows and columns in a temp table, call the stored procedure, and receive the results back into that table-shaped structure.
The HubSpot company demo started with a simple temp table containing company name, then added HubSpot’s primary key, then added additional properties such as city. A separate metadata procedure retrieved the available HubSpot company properties, including custom fields, so the SQL developer could decide what to request and how to store it downstream.
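The temp-table-as-contract call described above can be sketched roughly as follows. The table and procedure names here are hypothetical illustrations of the pattern, not the actual objects shown in the demo.

```sql
-- Hypothetical sketch: the caller defines the requested shape,
-- and a reusable procedure fills it from the HubSpot API.
CREATE TABLE #Company
(
    CompanyName  NVARCHAR(200) NULL,
    HubSpotId    BIGINT        NULL,  -- HubSpot's primary key
    City         NVARCHAR(100) NULL   -- an additional requested property
);

-- The procedure inspects #Company's columns, maps them to HubSpot
-- properties, handles authentication and paging, and inserts rows.
EXEC dbo.usp_HubSpot_GetCompanies;   -- hypothetical procedure name

SELECT * FROM #Company;
```

Adding a column to the temp table is how the caller requests another property; the plumbing does not change.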
The deeper mechanism was an XML-based representation of C#/.NET code embedded inside a stored procedure. The procedure sends that XML to an external .NET executable, the executable runs the logic, and the result comes back to SQL. From the SQL side, the stored procedure remains the visible, shareable, editable asset. The core dependency model was intentionally small: one core stored procedure plus one executable that can execute the XML-described .NET work.
The demo covered the full lifecycle, not just reading data: records were inserted, updated, and deleted through the same pattern.
The ADP-to-STAR worker-to-staff injection scenario was discussed as the same architectural pattern, though it was not demoed because there was no available ADP test environment or API key. The important takeaway was not a finished ADP connector; it was the reusable integration approach that would make that kind of connector feasible.
The underlying case
The real business issue is not simply calling an API. The deeper issue is controlled cross-system movement. STAR may be the authoritative practice-management system. HubSpot may hold company, contact, sales, or marketing data. ADP may hold workers before they exist as STAR staff. Other systems may hold workflow, organizers, tax status, signatures, documents, or client metadata. The firm needs those systems to communicate without losing control of the rules that determine what should move, when it should move, and how exceptions should be handled.
The integration pattern puts the important control points in SQL: the integration contract, the state, the scheduling, the staging, the reconciliation, and the operational controls all remain visible to the SQL team.
That is the key distinction. The point is not to write everything in SQL. The point is to keep the integration contract, state, scheduling, staging, reconciliation, and operational control in SQL, while borrowing .NET for the capabilities SQL does not naturally provide.
Why it matters to STAR firms
STAR-centered firms often need integrations that are too specific for generic connectors. A HubSpot company may or may not become a STAR client. A STAR client field may or may not be allowed to update the CRM. An ADP worker may require validation, mapping, and review before becoming STAR staff. These are firm rules, not just API calls.
That makes SQL a strong control layer. The firm can stage the data, apply its own rules, review exceptions, log each attempt, and rerun or repair individual records without treating every integration as a separate application.
Technical strengths of the pattern
The strongest technical idea is the temp table as contract. The caller defines the shape. The stored procedure fills it. That gives SQL developers a natural interface while keeping reusable API plumbing separate from local business logic.
The second strength is the stored procedure as the deployable integration asset. Instead of distributing a full application project, the reusable logic can be shared as text, reviewed like SQL, and versioned with the rest of the firm’s database work.
The third strength is plumbing reduction. Authentication headers, paging, serialization, retry loops, object IDs, trace IDs, logging, file handling, email attachments, and response parsing can become reusable building blocks rather than repeated project overhead.
The fourth strength is extensibility. The same bridge that calls an API can also render SSRS reports, process files, parse HTML, use NuGet libraries, or perform other .NET work while leaving SQL in charge of the state and rules.
STAR-centered data warehouse foundations and practical SQL techniques
The second topic focused on building a stable SQL foundation over STAR: extract clean source structures, materialize them into durable warehouse tables, and expose flattened Client, Job, and Staff objects for reporting, administration, and downstream automation. The strongest design points were zero/default rows, role-ID resolution back to Staff, the separation between extraction and consumption, and the recommendation to read from STAR once before building richer structures away from the live STAR database.
What the demo established
The data warehouse section presented an opinionated but practical STAR-centered foundation. The lowest layer extracts and cleans STAR data before it is used for reporting, flattening, or multi-system combination. Table functions were preferred in the demo because they can sit lower in the infrastructure and are less likely to become casual reporting objects, though views can serve the same role when that fits the firm’s standards.
The first design principle was cleanup and standardization: extract the STAR data and clean it once, before it is used for reporting, flattening, or multi-system combination.
The second design principle was the zero/default row. STAR records generally start at positive IDs, so a zero row can represent unknown, not applicable, or unassigned. A job without a manager, for example, can point to ManagerID 0 instead of forcing every downstream query to use left joins. That small convention supports inner joins, referential integrity, indexed-view possibilities, and predictable row behavior.
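The zero/default row convention can be sketched as follows. The `dw` schema, table names, and columns are hypothetical stand-ins for the firm's warehouse objects.

```sql
-- Hypothetical sketch: a zero row representing "unassigned",
-- so downstream queries can use inner joins instead of left joins.
INSERT INTO dw.Staff (StaffID, StaffName)
VALUES (0, N'(Unassigned)');

-- A job without a manager points at ManagerStaffID = 0,
-- so this inner join still returns one row per job.
SELECT j.JobID, m.StaffName AS ManagerName
FROM dw.Job   AS j
INNER JOIN dw.Staff AS m
        ON m.StaffID = j.ManagerStaffID;
```

Because every foreign key resolves to a real row, referential integrity and indexed-view options stay open.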
The third design principle was materialization. The extraction object is not the warehouse by itself. It is used to populate physical warehouse tables on the firm’s chosen refresh schedule. Analysts and reports should eventually hit the warehouse copy, not repeatedly hit the live STAR database.
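A refresh of the materialized layer might look like the following sketch, assuming a hypothetical extraction table function `dbo.fnExtractClient` and warehouse table `dw.Client`.

```sql
-- Hypothetical sketch: populate a physical warehouse table from
-- the extraction object on the firm's chosen refresh schedule.
TRUNCATE TABLE dw.Client;

INSERT INTO dw.Client (ClientID, ClientName, PartnerStaffID)
SELECT ClientID, ClientName, PartnerStaffID
FROM dbo.fnExtractClient();   -- hypothetical extraction function
```

Reports and analysts then query `dw.Client` rather than touching the live STAR database.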
The layered model was: extraction objects that read and clean STAR, materialized warehouse tables refreshed on a schedule, and flattened Client, Job, and Staff objects exposed for consumption.
The demo also showed an important STAR-specific abstraction: Partner, Manager, and Supervisor IDs are not the same as Staff IDs. The suggested warehouse model resolves those role-specific identifiers back to the actual Staff ID so downstream users can treat the person consistently across reporting, integrations, and administrative work.
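That role-ID resolution can be sketched like this. `tblPartner` is named in the meeting; the column names and the join path are hypothetical, since the actual STAR schema was not reproduced here.

```sql
-- Hypothetical sketch: resolve a role-specific Partner ID on a job
-- back to the underlying Staff ID during extraction.
SELECT
    j.JobID,
    ps.StaffID   AS PartnerStaffID,   -- the person, not the role row
    ps.StaffName AS PartnerName
FROM dbo.tblJob     AS j
INNER JOIN dbo.tblPartner AS p  ON p.PartnerID = j.PartnerID
INNER JOIN dbo.tblStaff   AS ps ON ps.StaffID  = p.StaffID;
```

Downstream consumers then see one consistent Staff identity regardless of which role table the ID originally came from.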
The broader architecture can extend beyond STAR. The suggested model was one database per source system, plus a master database that combines the source-specific structures into the firm’s broader analytical layer. That lets STAR expertise stay in the STAR-centered database, other vendor expertise stay in their own databases, and the exposed reporting layer draw from all of them.
Participant refinements
Kent raised a practical point about chargeable versus non-chargeable status. To determine that state from WIP, a query may need to move from WIP to Job to Job Type. His recommendation was to pull the chargeable/non-chargeable attribute down onto the Job object, and potentially onto WIP-facing structures, because it is too important to require repeated multi-step joins. The discussion accepted that as a good example of smart denormalization.
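Kent's suggestion amounts to resolving the multi-step join once, during extraction, as in this hypothetical sketch (table and column names are illustrative).

```sql
-- Hypothetical sketch: pull the chargeable flag down from Job Type
-- onto the flattened Job object, so WIP queries no longer repeat
-- the WIP -> Job -> Job Type walk.
SELECT
    j.JobID,
    j.JobName,
    jt.IsChargeable      -- denormalized from Job Type
FROM dbo.tblJob     AS j
INNER JOIN dbo.tblJobType AS jt
        ON jt.JobTypeID = j.JobTypeID;
```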
Brad raised the idea of including audit trail tables so the warehouse can support slowly changing dimensions and point-in-time analysis. The response connected that to an “as of date” pattern, where a table function can return either the latest version of a record or the version that was true at a selected point in time.
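One way the "as of date" pattern could be expressed is an inline table function over an effective-dated history table. Everything here is a hypothetical sketch of the idea discussed, not the demoed objects.

```sql
-- Hypothetical sketch: return the version of each client record
-- that was true at @AsOfDate, from an audit/history table.
CREATE FUNCTION dw.fnClientAsOf (@AsOfDate DATETIME2)
RETURNS TABLE
AS
RETURN
    SELECT c.ClientID, c.ClientName, c.PartnerStaffID
    FROM dw.ClientHistory AS c               -- hypothetical history table
    WHERE c.ValidFrom <= @AsOfDate
      AND (c.ValidTo IS NULL OR c.ValidTo > @AsOfDate);
```

Passing the current time returns the latest version; passing a past date supports point-in-time analysis and slowly changing dimensions.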
A key performance point also emerged: when the warehouse lives outside the STAR server, the extraction should read STAR as directly and efficiently as possible, store the result locally, and then build richer warehouse objects from the local copy. That keeps heavy shaping and reporting work away from the live STAR database.
The underlying case
This was not only about reporting convenience. It was about creating a stable analytical model over STAR.
STAR is a transactional practice-management system. It has real operational structure, but that structure is not always the easiest surface for reporting, administration, dashboards, or integrations. Role identity may be split across tables. Some dimensions may be nullable. Some lookups may require several joins. Different staff members may write similar queries differently and get slightly different answers.
The warehouse pattern gives the firm a controlled SQL interpretation of STAR: one place where identity is resolved, defaults are standardized, joins are prebuilt, and common questions have one shared answer.
The deep value is not simply that Client, Job, and Staff objects exist. The value is that they become the canonical firm objects that other reporting, administration, security, integration, and automation processes can reference.
Why Client, Job, and Staff are the right starting point
For a STAR-centered data warehouse, Client, Job, and Staff form the minimum viable business spine. They answer the core questions that almost every later process depends on.
Client answers who the firm serves and how each client is identified and owned. Job answers what work exists, how it is typed, and who is responsible for it. Staff answers who performs, manages, and owns that work.
Once those three objects are clean, the rest of the firm’s SQL work becomes easier: WIP, AR, billing, collections, workflow, routing, dashboards, exception reports, permissions, cross-system matching, and operational automation.
Technical strengths of the pattern
The strongest design move is the separation between extraction and consumption. The extraction layer stays close to STAR and focuses on clean, reliable source capture. The consumption layer can then flatten, denormalize, rename, and combine aggressively because it is no longer hitting STAR directly.
The zero/default row design is stronger than it may look. It is a small modeling convention that prevents many downstream compromises. It supports inner joins, referential integrity, indexed views, and predictable row counts.
The role-ID abstraction is equally important. STAR may separate Partner, Manager, and Supervisor identity from Staff identity, but the firm usually cares about the person. Resolving those role IDs back to Staff ID prevents downstream reporting from treating the same person as separate identities.
The warehouse also creates a safer basis for administrative SQL operations. Administrators can generate exception lists, controlled updates, cross-system comparisons, and point-in-time analyses from stable objects instead of rediscovering STAR internals each time.
Rule-based maintenance of Partner, Manager, and Supervisor tables
The third topic addressed a very specific STAR maintenance problem: keeping tblPartner, tblManager, and tblSupervisor aligned with the Staff table. The proposed approach was to derive role-table membership from Staff rules, allow overrides where needed, preview the intended state, and synchronize the STAR role tables through lightweight trigger logic or trigger-called stored procedures.
What the demo established
This topic came from a common transition point for firms that moved from CCH ProSystem fx Practice Management into STAR. Separate Partner, Manager, and Supervisor tables may not have existed in the same way before, which means the firm inherits another set of STAR structures that need to be maintained accurately.
The proposed pattern treats Staff as the source of truth. A table function or view creates a virtual representation of who should exist in each role table, and synchronization logic compares that intended state against the actual STAR role tables.
The pattern was: derive the intended role-table membership from Staff rules, allow explicit overrides where needed, preview the intended state, and then synchronize the actual STAR role tables.
The important design choice was to separate “gather the data” from “use the data.” The rule-producing object can be run, inspected, and compared before it changes anything. That gives the administrator a clear preview of what the STAR role tables should look like before synchronization is applied.
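The preview step can be expressed as a simple diff between the intended state and the actual table. The rule function `dw.fnIntendedPartners` and the `StaffID` column on `tblPartner` are hypothetical placeholders for the firm's own objects.

```sql
-- Hypothetical sketch: compare the intended Partner list against
-- the actual STAR role table before any sync is applied.

-- Who should be added to tblPartner?
SELECT i.StaffID
FROM dw.fnIntendedPartners() AS i      -- hypothetical rule object
EXCEPT
SELECT p.StaffID FROM dbo.tblPartner AS p;

-- Who is in tblPartner but no longer qualifies under the rules?
SELECT p.StaffID FROM dbo.tblPartner AS p
EXCEPT
SELECT i.StaffID FROM dw.fnIntendedPartners() AS i;
```

The same two queries, with different rule functions and target tables, cover Manager and Supervisor as well.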
The group also discussed trigger discipline. A trigger can keep role tables aligned immediately when Staff changes, but it must be small, fast, and testable. Kent suggested keeping the trigger itself simple and pushing heavier logic into a stored procedure. The response agreed and added that trigger-related custom procedures can be organized into a dedicated schema so they are clearly separated from normal STAR objects.
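The trigger discipline discussed might look like this sketch: a thin trigger that only delegates, with the real logic in a procedure under a dedicated custom schema. All names here are hypothetical.

```sql
-- Hypothetical sketch: keep the trigger small and push the heavy
-- comparison and sync logic into a separately testable procedure.
CREATE TRIGGER trg_Staff_SyncRoles
ON dbo.tblStaff
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- The procedure lives in a dedicated schema so custom objects
    -- stay clearly separated from normal STAR objects.
    EXEC custom.usp_SyncRoleTables;
END;
```

Because the procedure can also be run on its own, the sync remains testable outside the trigger path.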
The underlying case
This is an identity-governance problem disguised as a table-maintenance problem.
tblPartner, tblManager, and tblSupervisor are not just lookup tables. They influence assignment, filtering, reporting, workflow, ownership, and possibly permissions. When those tables drift from Staff, the firm’s operational view of who can own work becomes less reliable.
A rule-based sync gives the firm a more controlled model: membership is derived from rules, overrides are explicit, changes are previewed before they are applied, and every synchronization is repeatable.
The result is a role-maintenance process that behaves like an operating control instead of a manual cleanup task.
Why the rule layer matters
The most important part of the pattern is not the trigger. It is the virtual rule layer that shows what the role tables should look like before anything is applied.
That layer can answer who should be in each role table, which rule or override put them there, and what will change when the sync runs.
Without that layer, the sync becomes hidden logic. With it, the firm can see the intended state, compare it to STAR, and understand why each person is being added, updated, excluded, or left alone.
Technical strengths of the pattern
The strongest technical decision is making the desired role state queryable before it is applied. A virtual Partner list, Manager list, and Supervisor list can be compared against the actual STAR tables to produce discrepancy reports, dry runs, and repair scripts.
The second strength is consistency. Once the rules are expressed in one place, the firm no longer depends on different people remembering when to add someone to tblPartner, tblManager, or tblSupervisor.
The third strength is reusability. Partner, Manager, and Supervisor synchronization follow the same basic pattern with different filters and target tables, which makes the model easier to maintain once the firm’s rules are clear.
The fourth strength is auditability. Inserts, updates, exclusions, and exceptions can be logged with the rule or override that produced them, giving the firm a history of how role membership changed and why.
Task Centre / BPA pain points: advanced if/then logic that may be easier to handle directly in SQL
The fourth topic focused on the point where visual automation designers stop being the clearest place for the logic. Task Centre and BPA remain useful for accessible, business-maintainable flows, but complex branching, loops, report generation, attachment assembly, parameter handling, retries, and exception logic can be easier to govern from SQL. The example centered on rendering SSRS reports, storing report bytes as attachments, and sending configured emails from SQL-driven logic.
What the demo established
This topic came near the end of the meeting, but it tied directly back to the larger theme. SQL can act as the operating layer when the work is too data-heavy for a visual designer to express comfortably.
The example shown was SQL-controlled SSRS report rendering. Instead of manually running reports or forcing every condition into a designer, the process can be driven by SQL tables and stored procedures.
The pattern included rendering SSRS reports from stored-procedure calls, storing the returned report bytes as attachments, and sending configured emails from SQL-driven logic.
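A table-driven version of that process might be sketched as a SQL work queue that a render procedure works through. The queue table, columns, and the `usp_RenderQueuedReports` procedure are hypothetical illustrations of the approach.

```sql
-- Hypothetical sketch: a SQL work queue drives SSRS rendering.
CREATE TABLE dbo.ReportQueue
(
    QueueID     INT IDENTITY PRIMARY KEY,
    ReportPath  NVARCHAR(400),           -- SSRS report path
    ParamXml    XML,                     -- report parameters
    Recipient   NVARCHAR(200),
    Status      VARCHAR(20) DEFAULT 'Pending',
    ReportBytes VARBINARY(MAX) NULL      -- rendered output, stored back
);

-- A bridge procedure (the same SQL-plus-.NET mechanism from the
-- API topic) renders each pending item and stores the bytes.
EXEC dbo.usp_RenderQueuedReports @Status = 'Pending';
```

Because the queue is an ordinary table, the pending work, rendered files, and failures can all be reviewed with plain queries.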
The broader point was that the same SQL-plus-.NET bridge discussed in the API topic can support more than APIs. It can also handle report rendering, file-based processing, HTTP calls, HTML retrieval, HTML parsing, third-party libraries, NuGet packages, and other operations that SQL teams often need around STAR.
The underlying case
This is an orchestration problem.
Task Centre and BPA are valuable when the workflow is visible, understandable, and business-maintainable. But many accounting-firm automations are data-heavy processes. They evaluate many records, apply firm-specific rules, generate outputs, handle exceptions, and need a reliable audit trail.
Those processes often involve complex branching, loops, report generation, attachment assembly, parameter handling, retries, and exception logic.
SQL is often the cleaner home for that part of the work because it can express set logic, joins, filters, grouping, parameters, transactions, staging, previews, reconciliation, and logging directly.
The right tool for the right part of the workflow
The strongest framing is not that SQL replaces Task Centre or BPA. The better framing is that each tool should own the part of the workflow it expresses best.
Task Centre or BPA is a strong fit when the workflow needs to stay visible, understandable, and maintainable by business users. SQL-controlled automation is a strong fit when the workflow is really a data-driven process: many records, firm-specific rules, complex branching, and a need for staging, logging, and reconciliation.
That distinction keeps the message practical. Visual workflow tools remain useful for visible process coordination. SQL becomes the better control layer when the workflow is really a data-driven operating process.
The SQL-powered report and email pattern
The report/email example is a strong case because many accounting-firm automations ultimately produce artifacts: reports, PDFs, workpapers, extracts, exception lists, client notices, internal reminders, or partner summaries.
When those artifacts are generated from SQL-controlled configuration, the process becomes repeatable and inspectable. The firm can store the rules and delivery options in tables, render the outputs, review the staged work, and then send from a controlled process.
A production version could include a reviewable work queue, staged rendering, test recipients, retry handling, and logging of every attempted delivery.
That makes the automation easier to test and easier to trust. The firm can review the work queue, rendered files, recipients, and logs before enabling the final delivery step.
Beyond reports
The meeting also showed that the same approach can extend beyond SSRS. Once SQL can safely borrow .NET capabilities, many operations become available without turning each one into a full application project.
The point is not that SQL should do everything. The point is that SQL can remain the center of state, configuration, and audit while .NET handles the technical capabilities SQL does not naturally have.
Technical strengths of the pattern
The strongest technical strength is control. SQL can define the population, rules, parameters, outputs, and sequence of work in a way that is easy to test with ordinary queries.
The second strength is table-driven configuration. Report paths, recipient groups, service-line rules, partner rules, retry settings, and output formats can live in tables instead of being scattered through a visual flow or hard-coded script.
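Table-driven configuration of this kind might be sketched as follows, including a preview mode that reroutes mail to a test recipient. The table and every column here are hypothetical examples, not the demoed schema.

```sql
-- Hypothetical sketch: delivery rules live in a table instead of
-- being scattered through a visual flow or hard-coded script.
CREATE TABLE dbo.DeliveryConfig
(
    ConfigID      INT IDENTITY PRIMARY KEY,
    ServiceLine   NVARCHAR(100),
    RecipientList NVARCHAR(1000),
    OutputFormat  VARCHAR(10)  DEFAULT 'PDF',
    MaxRetries    INT          DEFAULT 3,
    PreviewMode   BIT          DEFAULT 1,   -- 1 = send to test recipient only
    TestRecipient NVARCHAR(200)
);
```

Flipping `PreviewMode` to 0 is then a reviewable, auditable configuration change rather than an edit to a flow or a script.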
The third strength is observability. A SQL-driven process can log run IDs, input rows, generated files, recipients, success and failure states, error messages, retry attempts, and reconciliation results.
The fourth strength is reuse. Once the firm has reliable procedures for rendering reports, saving attachments, composing emails, sending messages, and logging results, those pieces can be reused across many automations.
The fifth strength is safer testing. SQL can support preview mode, test recipients, approval lists, staged execution, and review reports before anything leaves the system.

SQL as the controlled operating layer around STAR
Across all four topics, SQL was treated less as a reporting endpoint and more as the place where firm-specific rules can be expressed, state can be managed, activity can be logged, and execution can be made repeatable across the systems surrounding STAR.
Govern Rules
Manage State
Capture Logs
Execute Reliably
