Some interesting suggestions here, which all seem to show a misunderstanding of how log backups work. A log backup contains ALL of the transaction log generated since the previous log backup, regardless of any full or differential backups taken in the interim. Stopping log backups or moving to daily full backups will have no effect on log backup sizes. Once the log backup chain has started, the only operation that clears the transaction log is a log backup.
The only exception to this rule is if the log backup chain has been broken (e.g. by switching to the SIMPLE recovery model, reverting from a database snapshot, or truncating the log using BACKUP LOG WITH NO_LOG/TRUNCATE_ONLY). In that case the first log backup will contain all the transaction log since the last full backup, which restarts the log backup chain. The same applies if the log backup chain hasn't been started yet: when you switch into FULL for the first time, you operate in a kind of pseudo-SIMPLE recovery model until the first full backup is taken.
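To see the chain behavior for yourself, here's a minimal sketch (the database name and backup paths are placeholders):

```sql
-- Full backup: starts (or restarts) the log backup chain
BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb.bak';

-- Each log backup contains ALL log generated since the PREVIOUS log backup,
-- no matter how many full or differential backups run in between
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_1.trn';
BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb_2.bak';  -- has no effect on the next log backup's size
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_2.trn';
```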
To answer your original question: without going into the SIMPLE recovery model, you're going to have to accept backing up all the transaction log. Depending on the actions you're taking, you could take more frequent log backups to reduce their individual size, or do more targeted database maintenance.
If you can post some info about the maintenance ops you're doing, I can help you optimize them. Are you, by any chance, doing index rebuilds followed by a shrink database to reclaim the space used by the index rebuilds?
If you have no other activity in the database while the maintenance is occurring, you could do the following:
- make sure user activity is stopped
- take a final log backup (this allows you to recover right up to the point of maintenance starting)
- switch to the SIMPLE recovery model
- perform maintenance - the log will truncate on each checkpoint
- switch to the FULL recovery model and take a full backup
- continue as normal
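The steps above can be sketched in T-SQL as follows (database name, logical steps, and backup paths are placeholders for your own):

```sql
-- 1. Final log backup before maintenance; lets you recover right up to
--    the point the maintenance starts (assumes FULL recovery and an
--    existing full backup)
BACKUP LOG YourDb TO DISK = N'D:\Backups\YourDb_PreMaint.trn';

-- 2. Switch to SIMPLE; from here the log truncates on each checkpoint
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- 3. ... run index rebuilds / other maintenance here ...

-- 4. Switch back to FULL and take a full backup to restart the
--    log backup chain
ALTER DATABASE YourDb SET RECOVERY FULL;
BACKUP DATABASE YourDb TO DISK = N'D:\Backups\YourDb_PostMaint.bak';
```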
Hope this helps - looking forward to more info.
Thanks
[Edit: after all the discussion about whether a full backup can alter the size of a subsequent log backup (it can't) I put together a comprehensive blog post with background material and a script that proves it. Check it out at https://www.sqlskills.com/blogs/paul/misconceptions-around-the-log-and-log-backups-how-to-convince-yourself/]
Something is changing your database!
Things like index rebuilds or defrags will cause page changes. Changes that are rolled back may have changed pages, so they count as well.
In addition, differential backups are considered 'fuzzy' and so will include transaction log data in the backup, which is needed for consistency.
Paul Randal wrote a very cool script a while back that will tell you how many extents have changed since your last full backup, so you can use this to calculate how big your differential is going to be.
In addition, you should be able to use this script to tell you what pages have changed in your db. This may help you solve your changing data mystery.
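Paul's script reads the differential bitmap pages directly; as an aside, on newer versions of SQL Server (2016 SP2 onward, if memory serves) a DMV exposes a similar figure without any page cracking:

```sql
-- Pages in extents modified since the last full backup, per data file;
-- a rough estimate of the next differential backup's size (8KB pages)
SELECT
    file_id,
    modified_extent_page_count,
    modified_extent_page_count * 8 / 1024.0 AS approx_diff_mb
FROM sys.dm_db_file_space_usage;
```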
Best Answer
When the database recovery model is set to FULL or BULK-LOGGED, the database and the transaction log are handled completely separately.
Thus, backing up the database does not free up the log; you must still back up the log as well to re-use it.
Vice versa, if the log file is too large for your liking - but see below - shrinking the data files does nothing; you must explicitly back up the log and then shrink the physical log file.
HOWEVER, do not rely on auto-grow just because it looks convenient! There are serious penalties for having a multi-gigabyte log file that was grown in 10MB increments, well documented on Microsoft's website.
In a nutshell, each growth of the log divides the newly added space into a number of new virtual log files (VLFs) - from a few up to sixteen, depending on the size of the growth - every single time this happens.
When your transaction log has grown from 10MB to 1GB in 10MB increments, it will end up fragmented into hundreds of virtual log files; this hurts performance.
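You can check how fragmented your log already is. A quick sketch (the DMV needs SQL Server 2016 SP2 or later; on older versions, DBCC LOGINFO returns one row per VLF instead):

```sql
-- Count virtual log files in the current database's transaction log
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```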
My advice is to calculate - or observe - how much transaction log you need between log backups, double that, and set your log to that size.
Then disable autogrowth and implement monitoring on log utilization so it warns you when it gets above 90% or so.
It is far better to manually add a few hundred MB every once in a while, keeping the fragmentation to a minimum.
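A sketch of both halves of that advice - the 4GB size, logical file name, and database name are placeholders for whatever your workload actually needs:

```sql
-- Pre-size the log in one operation instead of many small auto-growths
ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_log, SIZE = 4096MB);

-- Something to poll for your utilization alert (SQL Server 2012+);
-- fire a warning when this creeps above ~90%
SELECT used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```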