I'm looking for a sanity check. Service Broker and Event Notifications are new to me, so it is possible I'm abusing or misusing the technology. I'm experimenting with Event Notifications and Service Broker to log error messages. The process works well as long as I don't do something stupid such as stressing it by calling RAISERROR in a loop of 100,000 iterations - twice. It processed for a while, but eventually either SQL Server terminated (severity 25) or I had to force the SQL Server service to stop by killing its process via Task Manager; I could not connect to SQL Server or stop the service normally.
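Stripped down, the setup looks roughly like this (object names here are placeholders, not my real ones; the contract is the built-in event notification contract):

CREATE QUEUE dbo.ErrorNotifyQueue;
GO

CREATE SERVICE ErrorNotifyService
    ON QUEUE dbo.ErrorNotifyQueue
    ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO

-- Server-level notification for user error messages, delivered to the service above
CREATE EVENT NOTIFICATION ErrorMessageNotification
    ON SERVER
    FOR USER_ERROR_MESSAGE
    TO SERVICE 'ErrorNotifyService', 'current database';
GO

An activated procedure on dbo.ErrorNotifyQueue receives the event XML and writes it to a log table.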
I have several general questions. I think they are good questions. Any thoughts or suggestions would be appreciated.
1) I'm looking into generating deadlock and blocking email alerts. Event Notifications appear to be the way to go, since these events are relatively infrequent and I can respond to each one with an email. Is there a better way to automate blocking and deadlock email alerts that include the event details? (Sketch below.)
2) If I wanted to retain some history of errors for the developers, should Event Notifications be avoided when badly formed T-SQL can generate a high number of errors? Should I avoid TRC_ERRORS_AND_WARNINGS and USER_ERROR_MESSAGE? (I could tell the developers not to be stupid, but why would that stop them when it does not stop me?)
3) Is there a way to efficiently filter an Event Notification before it enters the queue, rather than after the RECEIVE statement is called? I get some events that I throw away after receiving them. For example, I might want events from all non-system databases without having to add a notification for each individual non-system database, or only errors with severity 11-17. (Sketch of my current post-RECEIVE filter below.)
4) Is there a trick to filtering out events generated by the procedure that the event activates? Without thinking, I used RAISERROR to debug the activated procedure; the result was that the queue was never empty, because processing the messages produced more events to process. Since then I avoid RAISERROR and use TRY...CATCH so the activated procedure never raises errors itself. (Sketch below.)
5) I can receive one message at a time into local variables or a batch of messages into a local table. Is a small batch the best approach even when there is memory pressure? (See the loop sketch below.)
6) In the activated procedure I keep processing in a loop until there are no more messages, which seems to be the most efficient approach. Is that always the case? Should I instead exit the procedure after a set (large) number of messages has been received and let the procedure activate again to continue processing?
7) Is there any point in using the MAX_QUEUE_READERS setting when processing event notifications? Should it be 1? (Sketch of the activation settings below.)
8) I currently get the next conversation group ID and process its messages within a transaction. Is this a bad idea with event notifications? Should I just call RECEIVE and get the next batch? I don't really care if I lose some messages when things are going badly. Should I avoid wrapping the RECEIVE in a transaction? (Sketch below.)
9) I could run a trace that starts with SQL Server; however, I think my only choice is to log to a file. Is there a way to trace to a table using SQL Trace without running Profiler? I would like to automate the process and have the data in a table so it can easily be queried and parsed for each database/team. (Sketch below.)
10) Is there a way to fix my process so it can handle 100,000 messages a minute? Is there a way to skip messages when it gets too busy? Can the query generating the messages be throttled - perhaps along with the query's designer - before the server gets into trouble? Could I query the memory used by a queue and drop/recreate the queue with a delay if there is a problem? (Sketch of a crude queue-depth check below.)
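For question 1, this is the sort of thing I have in mind, reusing the service from the setup above (sketch only; BLOCKED_PROCESS_REPORT also needs the 'blocked process threshold' option set):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 20;   -- seconds
RECONFIGURE;
GO

CREATE EVENT NOTIFICATION BlockingAndDeadlockNotification
    ON SERVER
    FOR DEADLOCK_GRAPH, BLOCKED_PROCESS_REPORT
    TO SERVICE 'ErrorNotifyService', 'current database';
GO

The activated procedure would then send the event XML with msdb.dbo.sp_send_dbmail.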
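For question 3, today I filter after the RECEIVE by shredding the event XML, something like the following (this assumes the message carries Severity and DatabaseName elements, which is what I see for USER_ERROR_MESSAGE):

DECLARE @msg xml, @severity int, @dbname nvarchar(128);
-- @msg is a message_body already pulled off the queue and cast to xml

SELECT  @severity = @msg.value('(/EVENT_INSTANCE/Severity)[1]', 'int'),
        @dbname   = @msg.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)');

IF @severity BETWEEN 11 AND 17
   AND @dbname NOT IN (N'master', N'model', N'msdb', N'tempdb')
BEGIN
    -- keep the event; everything else just gets thrown away after the RECEIVE
    PRINT 'log it';
END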
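For question 4, the per-message work inside the activated procedure now swallows its own failures instead of raising them, roughly like this (dbo.ErrorLog and dbo.ProcessingErrors are placeholder tables):

DECLARE @msg xml;   -- the event XML pulled from the queue earlier in the procedure

BEGIN TRY
    -- placeholder for the real per-message work
    INSERT dbo.ErrorLog (EventXml, LoggedAt) VALUES (@msg, GETDATE());
END TRY
BEGIN CATCH
    -- record the failure without RAISERROR, so no new event is generated
    INSERT dbo.ProcessingErrors (ErrorNumber, ErrorMessage, LoggedAt)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), GETDATE());
END CATCH;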
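For questions 5 and 6, the activated procedure currently loops over small batches, roughly like this:

DECLARE @batch TABLE
(
    conversation_handle uniqueidentifier,
    message_type_name   nvarchar(256),
    message_body        varbinary(max)
);

WHILE 1 = 1
BEGIN
    DELETE @batch;

    WAITFOR
    (
        RECEIVE TOP (100)
            conversation_handle,
            message_type_name,
            message_body
        FROM dbo.ErrorNotifyQueue
        INTO @batch
    ), TIMEOUT 5000;

    IF NOT EXISTS (SELECT 1 FROM @batch)
        BREAK;   -- nothing left; activation fires again when new messages arrive

    -- process the batch here: CAST(message_body AS xml), filter, write to the log table
END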
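For question 7, this is the activation setting I'm asking about (queue and procedure names are placeholders):

ALTER QUEUE dbo.ErrorNotifyQueue
WITH ACTIVATION
(
    STATUS = ON,
    PROCEDURE_NAME = dbo.usp_ProcessNotifications,   -- placeholder activation procedure
    MAX_QUEUE_READERS = 1,
    EXECUTE AS OWNER
);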
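For question 8, this is more or less the pattern I'm using now:

DECLARE @cg uniqueidentifier;

BEGIN TRANSACTION;

WAITFOR
(
    GET CONVERSATION GROUP @cg FROM dbo.ErrorNotifyQueue
), TIMEOUT 5000;

IF @cg IS NOT NULL
BEGIN
    -- in the real procedure this goes INTO a table variable
    RECEIVE TOP (100)
        conversation_handle, message_type_name, message_body
    FROM dbo.ErrorNotifyQueue
    WHERE conversation_group_id = @cg;

    -- process the messages here
END

COMMIT TRANSACTION;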
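For question 9, the closest I've found is a server-side trace to a file using the sp_trace_* procedures, then pulling the file into a table with fn_trace_gettable afterwards (the path and table name are placeholders; event 148 is the Deadlock graph event, used here only as an example):

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;   -- MB per rollover file
SET @on = 1;

-- Create a trace that writes to a file (option 2 = enable file rollover)
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\Blocking', @maxfilesize, NULL;

-- Event 148 = Deadlock graph, column 1 = TextData
EXEC sp_trace_setevent @TraceID, 148, 1, @on;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;
GO

-- Later, load the file into a table for querying
SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'C:\Traces\Blocking.trc', DEFAULT);

A wrapper procedure marked with sp_procoption as a startup procedure could create the trace when SQL Server starts, but that still leaves the file-then-import step.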
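For question 10, the only throttle I've come up with so far is checking the queue depth from the activated procedure and dropping the notification when it grows past some limit, along these lines (the limit is arbitrary and the notification name matches the setup sketch above):

DECLARE @depth int;

SELECT @depth = COUNT(*)
FROM dbo.ErrorNotifyQueue WITH (NOLOCK);

IF @depth > 50000   -- arbitrary limit
BEGIN
    -- stop new messages; processing of what is already queued continues
    DROP EVENT NOTIFICATION ErrorMessageNotification ON SERVER;
END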
I'm using a single-CPU test machine with only 1 GB of RAM, a 2 GB pagefile, and a single disk. OS: Windows 2003 R2 SP2. SQL: 9.0.3054 and 9.0.3228 (update package 6 just came out today). I could add memory, but I think that would just let me queue more messages before running into trouble.
I'll add code shortly - length limit.