Automatically Applying Windows Updates on Servers

It is important to apply updates to your Windows server. These updates help to keep your system secure and running at peak performance.

I do not recommend automatically applying these updates on your server, though. It is much better to set Windows updates to either “notify” or “download updates and notify”. The biggest problem with automatically installing updates is the automatic reboot that follows. Even if updates are set to install sometime in the middle of the night, when you are off the system, the reboot may happen later, when you are on; I have seen this in cases where the updates took a long time to install. The reboot may also be delayed for other reasons. If, for example, a user is left logged in on the console, the reboot may wait until after that user logs off, which would typically happen during business hours, when you are open and using your server. In addition, some updates are installed while shutting down or starting up, and I have seen such updates take an hour, or more, to install. Your server is unavailable during that time.

In addition to inconvenient restarting of your server, occasionally an update may require additional attention. Although it is rare, there may be recommended setting changes, or other manual steps. It is best to know about these and be prepared to address them, instead of suddenly being confronted with a server that is not working as desired.

My recommendation, therefore, is to not automatically apply updates on your servers. Instead, make it a frequent task to check the updates that are released and determine which should be applied. Then, after checking for any possible issues with your setup or software, install those updates at a time when the server may be rebooted if needed. Finally, check your system after the updates are installed (and the server rebooted, if needed) to make sure your system and software are operating properly.

While that is my recommendation for servers, it applies, to a lesser degree, to your workstations as well. For workstations, you should determine whether you can tolerate an unanticipated reboot and some possible downtime. If resources permit, I recommend handling your workstations the same as your servers, reserving updates for manual processing.

Dave.

Purging Distributions

As you use your Counterpoint SQL system, you will accumulate data. Sales history, purchasing history, receiving history: it all accumulates, and these tables can grow to be quite large.

One table that can grow very fast is the distributions table. Everything that is posted creates distributions, and depending on the settings used, a single posting can create many records. In a short time, this can build into thousands, or even millions, of records. Moreover, most people do not need most of the data in this table. If you interface to another accounting system, then once the distributions have been sent to that system, there is really no need for them in Counterpoint. Many others do not use the accounting function at all, either within Counterpoint or interfaced to another accounting system. In that case, those records are never used and are just taking up space.

Fortunately, distributions are one of the areas for which Counterpoint has a purge function. From the main menu, it is under System -> Accounting -> Utilities -> Purge Distributions. Under the Event tab, select the Post Date range that you want to purge. You may have to use the customize function to find the post date option. Make sure that you select a range that will leave any records you may still want. In a lot of cases, leaving 30 days of distributions is appropriate. However, that will vary depending on your use of these records and your needs. You may be able to purge all records (for example, if you do not use any of the distribution information within Counterpoint).
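If you want a rough idea of how many records a given cutoff will remove before running the purge, a quick count in SQL Server Management Studio can help. This is only a sketch: the table name GL_DIST and column name POST_DAT below are placeholders, so substitute the actual distribution table and post date column from your own Counterpoint database.

    -- Count the distribution records older than the planned cutoff date.
    -- GL_DIST and POST_DAT are placeholder names; use your actual table and column.
    SELECT COUNT(*) AS RecordsToPurge
    FROM dbo.GL_DIST
    WHERE POST_DAT < '20170101';   -- everything before this date would be purged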

Once you have selected the date range and chosen the purge option, you will get a report of the records that will be purged, and you will be asked to confirm that you actually wish to purge them. Once you confirm, the records are removed. This can take several minutes, depending on how many records you are purging at a time. That is all there is to it.

Dave.

Getting ready to upgrade Counterpoint SQL

With many people getting ready to upgrade Counterpoint SQL to the latest version, I thought I would talk about what can be done to prepare. You need to confirm that your systems meet the required prerequisites, and purge any data that can be purged.

On the systems side, the biggest requirement is that SQL Server be version 2008 R2, at a minimum. Version 2012 is also supported. Later versions may work without issues, but we are not recommending versions above 2012 at this point. If you need to upgrade your SQL Server software, you can purchase a 2016 license and install 2012 as a downgrade under that license. That way, you can move up to 2016 under the same license should it become necessary, or desirable, at some point in the future. That is the only issue we have been seeing as far as system requirements for upgrading Counterpoint SQL.
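To confirm which version of SQL Server you are currently running before planning the upgrade, you can ask the server itself. Something along these lines, run from SQL Server Management Studio, will report it (a ProductVersion starting with 10.50 is 2008 R2, and 11.0 is 2012):

    -- Report the SQL Server edition, version, and service pack level.
    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ServicePackLevel;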

As to the data, purging old data will speed up the upgrade process. In particular, purging the ticket history can make a big difference. Many times, purging old ticket history is something that has not been done in a long time, if ever. With the many interconnected tables that make up the ticket history, the number of records that even a small store can generate over the years can be amazing. During the upgrade of Counterpoint SQL, the ticket history tables will be updated with new fields. On very large history tables, this can be very time consuming.

In order to lessen the impact, the upgrade process does not perform the actual ticket history table upgrades. Rather, the existing tables are moved aside, and empty tables with the new schema are created. These tables are then populated from the data in the old tables by means of a scheduled job. This job migrates a number of records each time it runs. In most cases, it will be scheduled to run frequently and process a fairly small number of history records, starting with the latest tickets. Thus, over time the entire data set is migrated, and the records most needed for functions such as purchasing based on sales, or replenishment, are migrated first. Older tickets, which are usually not needed as much, are migrated later.

By purging the ticket history prior to upgrading, the migration process will obviously complete sooner. Every company is different in what they need for history, so some time should be spent in determining how much history to preserve. Two to three years seems to be the most common, but some wish to retain more, and some less. Once the cutoff date is determined, then the history should be purged up to that date. However, if you are using multi-site, I strongly recommend purging only one or two months of data at a time, and letting those changes replicate to all sites. If several months, or years of data are purged at once, it can severely impact the replication and even bring it down.
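To get a feel for how much ticket history you are carrying, and where a given cutoff date would fall, a count by year is a quick check. This is only a sketch; it assumes the main ticket history header table is named PS_TKT_HIST with a business date column of BUS_DAT, so verify those names against your own database before relying on it.

    -- Count ticket history headers by year to help pick a purge cutoff.
    -- PS_TKT_HIST and BUS_DAT are assumed names; confirm them on your system.
    SELECT YEAR(BUS_DAT) AS TicketYear,
           COUNT(*)      AS Tickets
    FROM dbo.PS_TKT_HIST
    GROUP BY YEAR(BUS_DAT)
    ORDER BY TicketYear;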

Those are the main points to address in preparing to upgrade Counterpoint SQL. Once they are done, it is simply a matter of scheduling a time and doing the upgrade, which usually takes a few hours at each location.

Dave.

Repairing a table in MS-SQL

In my last blog, I discussed how to check a database in MS-SQL using DBCC checkdb.  So, what happens if you check the database and find a corrupt table?  We will look at that today.

Looking at the output from the checkdb command will give you an idea of what needs to be done to repair a table.  The message will contain the minimum level of repair that is required.  This will be something like REPAIR_ALLOW_DATA_LOSS or REPAIR_REBUILD.  In my experience, it is almost always REPAIR_ALLOW_DATA_LOSS.  The first thing, of course, is to make a backup of your database.

Now, as to how to repair the table.  First, the database will have to be in single user mode.  Please note that if you are using Counterpoint SQL, you should stop the Counterpoint services first, or they will almost instantly re-attach, and there goes your single user connection.  Once the database is backed up and in single user mode, you can proceed to repair the table.
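Assuming the database is named MyDatabase, putting it into single user mode looks like this (the ROLLBACK IMMEDIATE option disconnects any sessions that are still attached):

    -- Disconnect other sessions and put the database into single user mode.
    ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;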

The basic command to do the repair, is:

  • DBCC checktable('TABLE_NAME', REPAIR_ALLOW_DATA_LOSS)

This starts the repair routine, and it can take quite a while on a large table.  Once the repair does end, it will report what was, or was not, repaired.  It has been my experience that it is very rare that repairs cannot be made, and those cases have been ones where there were physical problems with the hard drive.

Now, assuming that checktable reported that it was able to repair the table, check for any missing data.  It has been my experience that the actual loss of data is rare, but since the option used makes it possible, it needs to be checked.  How this is done varies.  It may be possible to check record counts or balances, depending on just what table was involved, and what information you have that can be verified.

Once the data has been checked for data loss, put the database back into multi-user mode.  At that point, you are back up and running.
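For reference, returning the database to multi-user mode is the mirror image of the single user command, again using MyDatabase as the example name:

    -- Allow normal connections again once the repair and data checks are done.
    ALTER DATABASE MyDatabase SET MULTI_USER;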

Checking Database Health

There may come a time when you get errors indicating corruption in your MS-SQL database.  Do not panic, as most of the time these are recoverable.

The first place to start is the database check utility, and that is what I am going to discuss today.

The MS-SQL database check utility is DBCC.  There are several options that can be run via DBCC, but the place to start is with checkdb.  The checkdb function of DBCC checks several different areas of the database.  The MS-SQL documentation states that checkdb:

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

That is a pretty thorough checking of the database, and should uncover any issues with corruption, or database consistency.

The basic command is “DBCC checkdb('Database-name')”.  So, if you have a database called “MyDatabase”, running “DBCC checkdb('MyDatabase')” will do all of the checking mentioned above on your database.

I will warn you, however, that you will get a report on every table, index, and such, as to what checkdb finds.  Since we are concerned here with finding any problems, and not looking at a list of the possibly hundreds of tables that do not have any issues, let’s tell DBCC and checkdb not to report those.  This is done with the “WITH NO_INFOMSGS” clause.  So, the full command is now “DBCC checkdb('MyDatabase') WITH NO_INFOMSGS”.  Now, what you will see is information for only tables that have any corruption or consistency issues.  Hopefully, you do not see anything reported, which means that no problems were detected.  If there are problems detected, then it will also report information as to what the problem is, and the minimum repair level that would be required.  Getting into doing those repairs will be the subject of a future blog.
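Putting that together, here is the form I run, again using MyDatabase as the example database name:

    -- Check the entire database, reporting only errors.
    DBCC CHECKDB('MyDatabase') WITH NO_INFOMSGS;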

SQL Maintenance on SQL Server Express

CCS recommends using the licensed versions of Microsoft SQL Server products to get the full feature set and the most current patch level support.  In some cases, the SQL Server Express Edition (SSE) may be adequate for your needs.  As with most community or express versions of software, there are limitations, often in a major feature like automatic task scheduling or a critical monitoring capability.  The limitation that matters here is that SSE cannot run the SQL Server Agent and schedule jobs.

To automate the SQL maintenance with SSE, you use stored procedures combined with the Windows Task Scheduler.  I create stored procedures to do the maintenance tasks that I want to do, such as rebuilding indexes, updating statistics, and backing up the databases.  Personally, I prefer to do each task as a separate stored procedure.  This avoids the monolithic procedure that is a nightmare to maintain.  These would be the same stored procedures that I would use for a full SQL Server implementation, where I can use the SQL Server Agent to schedule jobs.

At this point, for SSE, I make one additional stored procedure.  This one simply calls each of the maintenance procedures in turn.  Instead of creating a job with multiple steps, you simply call this procedure to perform all of the SQL maintenance tasks.
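As a sketch of what that wrapper might look like, assuming the individual task procedures are named usp_RebuildIndexes, usp_UpdateStatistics, and usp_BackupDatabases (the names are entirely up to you):

    -- Wrapper procedure that runs each maintenance task in turn.
    -- The procedure names below are examples; use whatever names you created.
    CREATE PROCEDURE dbo.usp_RunNightlyMaintenance
    AS
    BEGIN
        SET NOCOUNT ON;
        EXEC dbo.usp_RebuildIndexes;
        EXEC dbo.usp_UpdateStatistics;
        EXEC dbo.usp_BackupDatabases;
    END;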

Now we need a way to run the maintenance procedure automatically on a schedule.  The key here is to install the SQL command line utility (sqlcmd).  This utility allows you to execute SQL commands from the Windows command line, so it is simply a matter of using sqlcmd to execute the stored procedure that calls all the maintenance tasks.

The final piece of the puzzle is to create a batch file (or PowerShell script) that runs the sqlcmd line.  Once the batch file is done, use the Windows Task Scheduler to run it at the desired time.
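As a minimal sketch, the batch file can contain nothing more than the sqlcmd call. The instance name, database name, procedure name, and file path below are examples only:

    rem Contents of C:\Scripts\SqlMaintenance.bat - run the maintenance wrapper procedure.
    sqlcmd -S .\SQLEXPRESS -E -d MyDatabase -Q "EXEC dbo.usp_RunNightlyMaintenance"

Scheduling it can then be done from the Task Scheduler interface, or from an administrative command prompt, for example:

    rem Schedule the batch file to run every night at 2:00 AM.
    schtasks /create /tn "Nightly SQL Maintenance" /tr "C:\Scripts\SqlMaintenance.bat" /sc daily /st 02:00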

Although this approach is easy in concept, there can be several syntactical gotchas and quirks of the Task Scheduler to work through, such as Windows and SQL security requirements.  If you wish to explore the SSE alternative, contact CCS support.

Dave.

SQL maintenance for Counterpoint SQL

In order to keep your Counterpoint SQL system running efficiently, there are a few maintenance tasks that need to be performed on an ongoing basis.

First, backing up all databases should be a top priority.  Although this does not directly improve the performance of Counterpoint SQL, you will be very glad to have backups in that rare case where your SQL database gets corrupted beyond repair.  It does not happen often, but it can happen.  It is much better to make database backups at least once a day.
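A nightly full backup can be as simple as the following; adjust the database name and backup path for your system:

    -- Full backup of the Counterpoint database to a local file.
    -- Replace the database name and file path with your own.
    BACKUP DATABASE MyCounterpointDB
        TO DISK = 'D:\Backups\MyCounterpointDB.bak'
        WITH INIT, CHECKSUM;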

As for maintenance that directly affects Counterpoint SQL performance, the two things we have found to have the biggest impact are indexes and statistics.  To keep Counterpoint SQL performing well, indexes need to be rebuilt and statistics updated frequently; in fact, I recommend doing both on a nightly basis.  Doing this frequently allows your SQL server to locate records in the most efficient manner.  Sometimes it is nothing short of amazing to see the difference in the speed of your lookups, just from these two tasks.
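As a rough sketch of those two tasks in T-SQL, something like the following can be run against the Counterpoint database each night. It uses the undocumented (but widely used) sp_MSforeachtable helper to touch every table; on very large systems you may prefer a more selective approach based on fragmentation levels:

    -- Rebuild every index in the database, then refresh the statistics.
    -- sp_MSforeachtable substitutes each table name for the "?" placeholder.
    EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
    EXEC sp_updatestats;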

Something else that is often recommended is shrinking the database.  I do not recommend this unless a large amount of data has been purged and you are looking to recover the space.  If you do shrink your database, make sure that you allow ample empty space.  If you shrink it without leaving empty space, your SQL server will incur significant overhead, because it will frequently need to grow the database automatically in small increments.
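If you do find yourself shrinking after a large purge, DBCC SHRINKDATABASE accepts a target percentage of free space to leave in the files, for example:

    -- Shrink the database, but leave roughly 20 percent free space for growth.
    DBCC SHRINKDATABASE ('MyCounterpointDB', 20);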

We have put together the SQL scripts to set up and schedule these tasks.  By scheduling them to run every night, your Counterpoint SQL performance will be kept at a high level.  Contact us to set them up on your system.

Dave.

Basic tuning for SQL Server for Counterpoint-SQL

There are a couple of tuning settings that you should change to keep your Counterpoint SQL installation running efficiently, starting with the SQL Server itself.

First is the memory usage.  The default is a minimum of zero and a maximum of all installed memory.  It is much better to set these to specific values.  Since the operating system should have a couple of gigabytes to work with, I like to set the minimum and maximum a couple of gigabytes below the installed memory.  So, given eight gigabytes of RAM, I would set both the minimum and maximum memory to 6,144 megabytes (1,024 x 6).  By setting both the minimum and maximum to the same value, the server no longer needs to dynamically allocate and deallocate memory; it can just work on processing the SQL transactions.
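These values can be set from the server properties in Management Studio, or with sp_configure. Using the eight gigabyte example above:

    -- Fix both minimum and maximum server memory at 6,144 MB.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory', 6144;
    EXEC sp_configure 'max server memory', 6144;
    RECONFIGURE;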

The second setting that I try to use is the SQL priority boost.  If you are running a dedicated SQL server, and are not using the same computer for other processing, then definitely set the priority boost.  It will give the SQL Server processes a slight edge on using resources over the other processes that run, such as Windows housekeeping and background processing.

If you are also using your SQL server for other processing, then the priority boost needs to be evaluated.  Many times it can be turned on without significant impact on the other processing that you are doing.  Turn it on, and see how it goes.  If it slows your other processing too much, it can be turned off again.
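Priority boost is also an advanced sp_configure option; it takes effect after the SQL Server service is restarted, and can be set back to zero just as easily:

    -- Turn the priority boost on (1) or off (0); a service restart is required.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'priority boost', 1;
    RECONFIGURE;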

Dave.

The Barcode Scam

The bad guys are at it again, with a really simple, yet often successful, scam.  The scam is set up when someone scans and prints a barcode for one item, and then pastes that barcode over the barcode of a more expensive item.  At checkout, when the barcode is scanned, it of course rings in as the less expensive item.

Some scammers will take it even further.  They look for discarded receipts with a high-end product on them.  Then, they buy that high-end product using the barcode scam.  Later, they remove the fake barcode and return the item using the receipt they found.  Of course, during the return, since the real barcode is being used, the item is refunded at the real price, and the scammer pockets the difference.

In this case, your sales people need to have at least a rough idea of what your items sell for, especially the higher priced ones.  It is not a matter of knowing all the prices, but if an item comes up at $19.99, and the sales person knows that it actually sells for over $100.00, that should set off alarm bells.  Just being able to recognize that there is a large discrepancy is enough.  After all, the scammer is probably not going to try to ring a $79.99 item in at $75.99, but more likely at something like $14.99.

This scam brings up, yet again, how important it is for your sales people to be aware.  They are, after all, your first line of defense against many types of loss.  

Dave.