You might want to look at the answer to this similar question here:
https://stackoverflow.com/questions/11329823/add-where-clauses-to-sql-dynamically-programmatically
We've found that a SPROC which takes in a bunch of optional parameters and implements the filter like this:
CREATE PROC MyProc (@optionalParam1 NVARCHAR(50)=NULL, @optionalParam2 INT=NULL)
AS
...
SELECT field1, field2, ... FROM [Table]
WHERE
(@optionalParam1 IS NULL OR MyColumn1 = @optionalParam1)
AND (@optionalParam2 IS NULL OR MyColumn2 = @optionalParam2)
will cache the first execution plan it is run with (e.g. @optionalParam1 = 'Hello World', @optionalParam2 = NULL), but will then perform miserably if we pass it a different set of optional parameters (e.g. @optionalParam1 = NULL, @optionalParam2 = 42). (And obviously we want the performance of a cached plan, so WITH RECOMPILE is out.)
The exception here is that if there is ALSO at least one MANDATORY filter on the query which is HIGHLY selective and properly indexed, in addition to the optional parameters, then the above PROC will perform fine.
However, if ALL the filters are optional, the rather awful truth is that parameterized dynamic SQL actually performs better (unless you write 2^N different static PROCs, one for each combination of optional parameters).
Dynamic SQL like the example below will create and cache a different plan for each combination of the query parameters, but at least each plan will be 'tailored' to the specific query (and it doesn't matter whether it is a PROC or ad hoc SQL - as long as the queries are parameterized, their plans will be cached).
Hence my preference for:
DECLARE @SQL NVARCHAR(MAX)

-- Mandatory / static part of the query
SET @SQL = N'SELECT * FROM [Table] WHERE 1 = 1'

-- Append each filter only if its parameter was supplied, keeping the value parameterized
IF @optionalParam1 IS NOT NULL
BEGIN
    SET @SQL = @SQL + N' AND MyColumn1 = @optionalParam1'
END

IF @optionalParam2 IS NOT NULL
BEGIN
    SET @SQL = @SQL + N' AND MyColumn2 = @optionalParam2'
END

EXEC sp_executesql @SQL,
    N'@optionalParam1 NVARCHAR(50), @optionalParam2 INT',
    @optionalParam1 = @optionalParam1,
    @optionalParam2 = @optionalParam2
and so on for further optional parameters. It doesn't matter if we pass redundant parameters into sp_executesql - any that are not referenced in the query are simply ignored.
It is worth noting that ORMs like Linq2SQL and EF use parameterized dynamic SQL in a similar way.
The awful 1 = 1 hack can also be avoided if you keep track of whether any predicate has been appended yet: emit WHERE before the first predicate and AND before the second and subsequent ones. If there are no predicates at all, the WHERE clause disappears entirely.
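A minimal sketch of that approach, reusing the hypothetical @optionalParam1 / @optionalParam2 parameters from above:

DECLARE @SQL NVARCHAR(MAX) = N'SELECT * FROM [Table]'
DECLARE @hasPredicate BIT = 0   -- tracks whether any filter has been appended yet

IF @optionalParam1 IS NOT NULL
BEGIN
    -- the first predicate gets WHERE, later ones get AND
    SET @SQL = @SQL + CASE WHEN @hasPredicate = 0 THEN N' WHERE ' ELSE N' AND ' END
                    + N'MyColumn1 = @optionalParam1'
    SET @hasPredicate = 1
END

IF @optionalParam2 IS NOT NULL
BEGIN
    SET @SQL = @SQL + CASE WHEN @hasPredicate = 0 THEN N' WHERE ' ELSE N' AND ' END
                    + N'MyColumn2 = @optionalParam2'
    SET @hasPredicate = 1
END

EXEC sp_executesql @SQL,
    N'@optionalParam1 NVARCHAR(50), @optionalParam2 INT',
    @optionalParam1 = @optionalParam1,
    @optionalParam2 = @optionalParam2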
Note that despite the dynamic query, we are still parameterizing the filters, if present, so we have at least a first line of defence against SQL injection attacks.
I prefer to keep plenty (not all, but plenty) of the logic in the database for just a ton of reasons (and without getting into the whole "who needs stored procedures nowadays" argument).
Do this in your stored procedure.
These look like simple computations. Also use a transaction in your sproc to manage the writes to both tables, unless it is okay if your database becomes inconsistent (sometimes the write might succeed in one table, but not the other). And don't listen to the guys who try to tell you that "real programmers write their own rollback code," because that's just silly. :-) The only reliable rollback code is the rollback code inside the database engine itself, and it's easy to use.
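As a rough sketch of what that might look like (the table, column and procedure names here are purely illustrative), the stored procedure can wrap both writes in a transaction and leave the rollback to the engine:

CREATE PROC dbo.SaveOrderWithTotals (@orderId INT, @amount DECIMAL(18, 2))
AS
BEGIN
    SET XACT_ABORT ON;   -- make sure any error rolls back the whole transaction
    BEGIN TRY
        BEGIN TRANSACTION;

        -- the "simple computation" and the write to the first table
        UPDATE dbo.Orders
        SET    Total = Total + @amount
        WHERE  OrderId = @orderId;

        -- the write to the second table
        INSERT INTO dbo.OrderAudit (OrderId, Amount, AuditDate)
        VALUES (@orderId, @amount, SYSUTCDATETIME());

        COMMIT TRANSACTION;   -- both writes succeed together or not at all
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;   -- the engine's rollback, not hand-rolled compensation code
        THROW;                      -- surface the original error to the caller
    END CATCH
END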
If you do these calculations in your imperative code outside the database and use two stored procedures to write the data, you still need a database transaction unless it is okay for your database to become inconsistent. So now (if you do this outside the database) you are suddenly up to at least 4 round-trip calls to the database instead of 1:
- Begin TX
- Write table 1
- Write table 2
- Commit TX
If the writes are small, it is entirely possible that you'll have more overhead in the network round trips than in the actual work done by the database. Even though you can use the same connection to call both stored procedures (must, if you use a transaction), you still have to make multiple, separate calls to the database server across the network.
Finally, if the logic of the calculations changes, and you're doing those calculations in your code outside the database, then you have to change that code, recompile it and redeploy the updated program. But if you have those calculations in your stored procedure, you can rebuild that one stored procedure without having to redeploy (re-install?) your entire application.
All of these points are exacerbated if we're talking about a thick client desktop app--which you did not specify--because that means you have to re-install the updated app on every workstation, and while you have it installed on some workstations but not on others, you have different workstations doing different things to your database. Also, depending on your source control and build practices, you could be looking at a major operation to go back in time and branch your repository to make some changes in your data access layer, not to mention the potential work involved in merging the changes in the branch forward into the DAL in your main source line.
Do this in the stored procedure, because that is honestly the right place to do it.
Best Answer
Using stored procedures is one way, and has been in widespread use for many years.
A more modern way to interact with SQL Server databases from C# (or any .NET language) is to use Entity Framework. The advantage of Entity Framework is that it provides a higher level of abstraction.
To quote from Microsoft (https://msdn.microsoft.com/en-us/data/jj590134):
The ADO.NET Entity Framework enables developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications.
The use of an ORM vs Stored Procedures involves tradeoffs, particularly in terms of security and where the logic resides.
The "classic" approach to development with SQL Server is to have the application logic reside in stored procedures and programs only given security rights to execute stored procedures, not update tables directly. The concept here is that stored procedures are the business logic layer for the application(s). While the theory is sound, it has tended to fall out of favor for various reasons, being replaced by implementing the business logic in a programming language like C# or VB. Good applications are still implemented with a tiered approach, including separation of concerns etc. but are more likely to follow a pattern like MVC.
One downside of implementing logic in the ORM rather than the database is that it becomes harder for those responsible for the database (DA or DBA) to debug and test data integrity rules. Take the classic example of transferring money from your checking account to your savings account: it is important that this be done as an atomic unit of work, in other words sandwiched in a transaction. If this sort of transfer is only allowed to be done through a stored procedure, it is relatively easy for the DA and auditors to QA that stored procedure.
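A sketch of such a transfer procedure (again with illustrative names), which gives the DA and auditors a single, reviewable place where the atomicity is enforced:

CREATE PROC dbo.TransferFunds (@fromAccountId INT, @toAccountId INT, @amount DECIMAL(18, 2))
AS
BEGIN
    SET XACT_ABORT ON;   -- any runtime error aborts and rolls back the whole transaction
    BEGIN TRANSACTION;

    -- debit checking, refusing to overdraw
    UPDATE dbo.Accounts
    SET    Balance = Balance - @amount
    WHERE  AccountId = @fromAccountId
      AND  Balance >= @amount;

    IF @@ROWCOUNT = 0
        THROW 50001, 'Insufficient funds or unknown source account.', 1;

    -- credit savings
    UPDATE dbo.Accounts
    SET    Balance = Balance + @amount
    WHERE  AccountId = @toAccountId;

    COMMIT TRANSACTION;   -- money leaves one account and arrives in the other, or neither happens
END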
If, on the other hand, this is done via an ORM like Entity Framework, and in production it is discovered that on rare occasions money is taken from checking but not put into savings, debugging may be far more complex, particularly if multiple programs are potentially involved. This would most likely be an edge case, perhaps involving peculiar hardware issues that need to occur in a particular sequence, etc. How does one test for that?