Getting ready to upgrade Counterpoint SQL

With many people getting ready to upgrade Counterpoint SQL to the latest version, I thought I would talk about what can be done to prepare. There are two main areas: confirming that your systems meet the prerequisites, and purging any data that is no longer needed.

On the systems side, the biggest requirement is that SQL Server be at least version 2008 R2. Version 2012 is also supported, but later versions are not recommended at this point; they may work without issues, but we are not recommending versions above 2012 yet. If you need to upgrade your SQL Server software, you can purchase a 2016 license and install 2012 as a downgrade under that license. That way, you can move up to 2016 under the same license should it become necessary, or desirable, at some point in the future. That is the only system-requirement issue that we have been seeing for upgrading Counterpoint SQL.

As to the data, purging old data will speed up the upgrade process. In particular, purging the ticket history can make a big difference. Many times, purging old ticket history is something that has not been done in a long time, if ever. With many interconnected tables making up the ticket history, the number of records that even a small store can generate over the years can be amazing. During the upgrade of Counterpoint SQL, the ticket history tables will be updated with new fields, and on very large history tables this can be very time consuming.

In order to lessen the impact, the upgrade process does not perform the actual ticket history table upgrades. Rather, the tables are moved aside, and empty tables with the new schema are created. These tables are then populated from the data in the old tables by means of a scheduled job. The job migrates a set number of records each time it runs. In most cases, it is scheduled to run frequently and to process a fairly small number of history records, starting with the latest tickets. Thus, over time the entire data set is migrated, and the records most needed for functions such as purchasing based on sales, or replenishment, are migrated first. Older tickets, which are usually not needed as much, are migrated later.
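The migration job follows a common batch-copy pattern. As a rough sketch only, with hypothetical table and column names (the actual Counterpoint schema differs):

     -- Copy the newest batch of not-yet-migrated tickets into the new table.
     -- TKT_HIST, TKT_HIST_OLD, and the columns shown are placeholder names.
     INSERT INTO TKT_HIST (TKT_NO, TKT_DT)
     SELECT TOP (1000) o.TKT_NO, o.TKT_DT
     FROM TKT_HIST_OLD o
     WHERE NOT EXISTS (SELECT 1 FROM TKT_HIST n WHERE n.TKT_NO = o.TKT_NO)
     ORDER BY o.TKT_DT DESC

Each run moves a batch of the newest remaining tickets, so the most recent history becomes available first.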

By purging the ticket history prior to upgrading, the migration process will obviously complete sooner. Every company is different in what they need for history, so some time should be spent determining how much history to preserve. Two to three years seems to be the most common, but some wish to retain more, and some less. Once the cutoff date is determined, the history should be purged up to that date. However, if you are using multi-site, I strongly recommend purging only one or two months of data at a time, and letting those changes replicate to all sites. If several months or years of data are purged at once, it can severely impact the replication, and even bring it down.

Those are the main points to address in preparing to upgrade Counterpoint SQL. Once they are done, it is simply a matter of scheduling a time and doing the upgrade, which usually takes a few hours at each location.

Dave.

Repairing a table in MS-SQL

In my last blog, I discussed how to check a database in MS-SQL, using DBCC checkdb.  So, what happens if you check the database, and find a corrupt table?  We will look at that today.

Looking at the output from the checkdb command will give you an idea of what needs to be done to repair a table.  The message will contain the minimum level of repair that is required, something like REPAIR_ALLOW_DATA_LOSS or REPAIR_REBUILD.  In my experience, it is almost always REPAIR_ALLOW_DATA_LOSS.  The first thing, of course, is to make a backup of your database.

Now, as to how to repair the table.  First, the database will have to be in single user mode.  Please note that if you are using Counterpoint SQL, stop the Counterpoint services first, or they will almost instantly re-attach, and there goes your single-user connection.  Once the database is backed up, and in single user mode, you can proceed to repair the table.
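For example, assuming your database is named MyDatabase (substitute your own database name):

     ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE

The ROLLBACK IMMEDIATE clause rolls back any open transactions and disconnects other users right away.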

The basic command to do the repair, is:

  • DBCC checktable('TABLE_NAME', REPAIR_ALLOW_DATA_LOSS)

This starts the repair routine, and can take quite a while on a large table.  Once the repair does end, it will report as to what was, or was not, repaired.  It has been my experience that it is very rare that repairs cannot be made, and those cases have been where there were physical problems with the hard drive.

Now, assuming that checktable reported that it was able to repair the table, check for any missing data.  It has been my experience that the actual loss of data is rare, but since the option used makes it possible, it needs to be checked.  How this is done varies.  It may be possible to check record counts or balances, depending on just what table was involved, and what information you have that can be verified.

Once the data has been checked for data loss, put the database back into multi-user mode.  At that point, you are back up and running.
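Again assuming a database named MyDatabase:

     ALTER DATABASE MyDatabase SET MULTI_USER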

Checking Database Health

There may come a time that you get errors indicating corruption in your MS-SQL database.  Do not panic, as most of the time these are recoverable.

The first place to start is using the database check utility, and that is what I am going to discuss today.

The MS-SQL database check utility is DBCC.  There are several options that can be run via DBCC, but the place to start is with the checkdb.  The checkdb function of DBCC checks several different areas of the database.  The MS-SQL documentation states that checkdb:

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

That is a pretty thorough checking of the database, and should uncover any issues with corruption, or database consistency.

The basic command is "DBCC checkdb('Database-name')".  So, if you have a database called "MyDatabase", running "DBCC checkdb('MyDatabase')" will do all of the checking mentioned above on your database.

I will warn you, however, that you will get a report on every table, index, and such, as to what checkdb finds.  Since we are concerned here with finding any problems, and not looking at a list of the possibly hundreds of tables that do not have any issues, let's tell DBCC and checkdb not to report those.  This is done with the "WITH NO_INFOMSGS" clause.  So, the full command is now "DBCC checkdb('MyDatabase') WITH NO_INFOMSGS".  Now, what you will see is information for only tables that have any corruption or consistency issues.  Hopefully, you do not see anything reported, which means that no problems were detected.  If there are problems detected, then it will also report information as to what the problem is, and the minimum repair level that would be required.  Getting into doing those repairs will be the subject of a future blog.

SQL Maintenance on SQL Server Express

CCS recommends using the licensed versions of Microsoft SQL Server products to get the full feature set and the most current patch-level support.  In some cases, the SQL Server Express Edition (SSE) may be adequate for your needs.  As with most community or express version software, there are limitations, often a major feature like automatic task scheduling or a critical monitoring capability.  The limitation that SSE has is that you cannot run the SQL Server Agent and schedule jobs.

To automate the SQL maintenance with SSE, you use stored procedures, combined with the Windows task scheduler.  I create stored procedures to do the maintenance tasks that I want to do, such as rebuilding indexes, updating statistics, and backing up the databases.  Personally, I prefer to do each task as a separate stored procedure.  This avoids the monolith procedure that is a nightmare to maintain.  These would be the same stored procedures that I would use for a full SQL server implementation, where I can use the SQL Server Agent to schedule jobs.

At this point, for SSE, I make one additional stored procedure.  This one simply calls each of the maintenance procedures in turn.  Instead of creating a job with multiple steps, you simply call this procedure to perform all of the SQL maintenance tasks.
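A minimal sketch of such a wrapper procedure, assuming the individual maintenance procedures are named as shown (these names are examples, not actual CCS procedure names):

     CREATE PROCEDURE dbo.usp_RunAllMaintenance
     AS
     BEGIN
         -- Call each maintenance task in turn.
         EXEC dbo.usp_RebuildIndexes
         EXEC dbo.usp_UpdateStatistics
         EXEC dbo.usp_BackupDatabases
     END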

Now, to be able to schedule the maintenance procedure to run automatically, the key is to install the SQL command line utility (sqlcmd).  This utility allows you to execute SQL commands from the Windows command line.  It is then simply a matter of using "sqlcmd" to execute the stored procedure which calls all the maintenance tasks.

The final piece of the puzzle is to create a batch file (or PowerShell script) that runs the sqlcmd line.  Once the batch file is done, use the Windows Task Scheduler to run it at the desired time.
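For example, a one-line batch file along these lines will do it (the instance name and procedure name are examples; adjust them to your installation):

     sqlcmd -S .\SQLEXPRESS -E -Q "EXEC dbo.usp_RunAllMaintenance"

Here -S names the server and instance, -E uses Windows authentication, and -Q runs the query and exits.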

Although this approach is easy in concept, there can be several syntactical gotchas and quirks of the task scheduler to work through, for example, Windows and SQL security requirements.  If you wish to explore the SSE alternative, contact CCS support.

Dave.

SQL maintenance for Counterpoint SQL

In order to keep your Counterpoint SQL system running efficiently, there are a few maintenance tasks that need to be done.  These need to be performed on an on-going basis.

First, backing up all databases should be a top priority.  Although this does not directly improve the performance of Counterpoint SQL, you will be very glad to have these in the rare case that your SQL database gets corrupted beyond repair.  It does not happen often, but it can happen.  Much better to make database backups at least once a day.
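A basic full backup can be taken with a single command (the database name and path are examples):

     BACKUP DATABASE MyDatabase
     TO DISK = 'D:\Backups\MyDatabase.bak'
     WITH INIT

The WITH INIT option overwrites the previous backup in that file; omit it to append backups instead.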

As for Counterpoint SQL performance, the two things that we have found to have the biggest impact are indexes and statistics.  What we have found is that, in order to keep Counterpoint SQL performing well, rebuilding indexes and updating statistics need to be done frequently.  In fact, I recommend that they be done on a nightly basis.  Doing this frequently allows your SQL server to locate records in the most efficient manner.  Sometimes it is nothing short of amazing to see the difference in the speed of your lookups, just from doing these two tasks.
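For a single table, the two tasks look like this (the table name is just an example); the system procedure sp_updatestats will update statistics across the whole database:

     ALTER INDEX ALL ON dbo.IM_ITEM REBUILD
     UPDATE STATISTICS dbo.IM_ITEM
     EXEC sp_updatestats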

Something else that is often recommended is shrinking the database.  I do not recommend this unless a large amount of data has been purged, and you are looking to recover the space.  If you do shrink your database, make sure that you allow ample empty space.  If you shrink it without leaving empty space, your SQL server will incur significant overhead, because it will frequently need to grow the database automatically in small increments.
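If you do decide to shrink after a large purge, DBCC SHRINKDATABASE takes a target percentage of free space to leave in the files.  For example, to leave ten percent free:

     DBCC SHRINKDATABASE (MyDatabase, 10)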

We have put together the SQL scripts to setup and schedule these tasks.  By scheduling these to run every night, your Counterpoint SQL performance will be kept at a high level.  Contact us to set them up on your system.

Dave.

Basic tuning for SQL Server for Counterpoint-SQL

There are a couple of tuning settings that you should change to keep your Counterpoint SQL installation running efficiently, starting with the SQL server itself.

First is the memory usage.  The default is a minimum of zero, and a maximum of all memory.  It is much better to set these to specific values.  Since the operating system should have a couple of gigabytes to work with, I like to set the minimum and maximum a couple of gigabytes below the installed memory.  So, given eight gigabytes of RAM, I would set both the minimum and maximum memory to 6,144 (1,024 x 6) megabytes.  By setting both the minimum and maximum to the same value, the need for the server to dynamically allocate and deallocate memory is alleviated.  It can just work on processing the SQL transactions.
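These values can be set in the server properties in Management Studio, or with sp_configure.  Using the eight-gigabyte example above:

     EXEC sp_configure 'show advanced options', 1
     RECONFIGURE
     EXEC sp_configure 'min server memory (MB)', 6144
     EXEC sp_configure 'max server memory (MB)', 6144
     RECONFIGURE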

The second setting that I try to use is the SQL priority boost.  If you are running a dedicated SQL server, and are not using the same computer for other processing, then definitely set the priority boost.  It will give the SQL programs a slight edge on using resources over other processes that run, such as some of the Windows housekeeping and background processing.

If you are using your SQL server for other processing, also, then using the SQL priority boost would need to be evaluated.  Many times it can be turned on, without significant impact on the other processing that you are doing.  Turn it on, and see how it goes.  If it does slow your other processing too much, it can be turned off again.
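The priority boost option is also set through sp_configure; a value of 1 turns it on, and 0 turns it off again:

     EXEC sp_configure 'priority boost', 1
     RECONFIGURE

Note that this particular setting takes effect after the SQL Server service is restarted.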

Dave.

The Barcode Scam

The bad guys are at it again, with a really simple, yet often successful, scam.  The scam is set up when someone scans and prints a barcode for one item, and then pastes that barcode over the barcode for a more expensive item.  At checkout, when the barcode is scanned, it of course rings in as the less expensive item.

Some scammers will take it even further.  They look for discarded receipts, with a high end product on it.  Then, they buy that high end product, using the barcode scam. Later, they remove the fake barcode and return the item using the receipt they found.  Of course during the return, since the real barcode is being used, the item is returned for the real price, and the scammer pockets the difference.

In this case, your sales people need to have at least a rough idea of what your items sell for, especially the higher priced ones.  It is not a matter of knowing all the prices, but if an item comes up at $19.99, and the sales person knows that it is actually over $100.00, that should set off the bells.  Just being able to recognize that there is a large discrepancy is enough.  After all, the scammer is probably not going to try to scam an item that sells for $79.99 at $75.99, but more likely at something like $14.99.

This scam brings up, yet again, how important it is for your sales people to be aware.  They are, after all, your first line of defense against many types of loss.  

Dave.

Renaming workstations using Offline Ticket Entry

I recently worked with a customer who had their Offline Ticket Entry (OLTE) stop updating.  When investigating the cause, I found that they had renamed the workstation that was running OLTE.  While changing the name to something more meaningful within their network structure made sense, OLTE immediately quit communicating with the server.

The ultimate reason for this turned out to be that CP-Services uses the internal SQL server name.  Since this name was set when SQL Server was installed, it was the original name of the workstation.  So, when CP-Services started the process of synchronizing with the main server, it was attempting to connect to a SQL server at the original workstation name, which of course did not exist.

In this case, the resolution was simple enough: changing the internal server name within SQL Server on the workstation.  However, this does point out how a seemingly innocuous change in one area can throw other parts of your system into a tailspin.  The best course of action, before changing anything within your network, is to fully research and evaluate the system, or systems, that you propose to change.  Identify any software that could possibly be affected, and the role that software plays in your organization.  Then, once the change is made, test all of those pieces of software to make sure they are still operating as desired.
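For reference, this is the usual way to check and correct the internal server name after a machine rename (OLD_NAME and NEW_NAME are placeholders, and the SQL Server service must be restarted afterward):

     -- Compare what SQL Server thinks its name is to the actual machine name.
     SELECT @@SERVERNAME, SERVERPROPERTY('MachineName')
     -- If they differ, drop the old name and register the new one.
     EXEC sp_dropserver 'OLD_NAME'
     EXEC sp_addserver 'NEW_NAME', 'local'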

Remember that CCS Retail Systems Technical Support is always available to help you.  You can reach us at our support department or 800-672-4806.

Dave.

Repairing SQL database corruption

With a little good luck, you will never run into having a corrupt SQL database.  With a little bad luck, it will happen.  Then the question becomes how bad is the corruption, and can it be repaired.

One option, of course, is to restore the database from a good backup (which you should always have).  The second option is to try to recover the database.  The first step, is to check the database, and determine what is needed to repair it.  This is done with the "DBCC CHECKDB" command:

     DBCC checkdb('DatabaseName') WITH NO_INFOMSGS

The "WITH NO_INFOMSGS" greatly simplifies the output, as only the messages relating to the corruption are displayed.  In the majority of cases, this will be that a table is corrupt, and it will state what level of recovery is required.  The level of recovery is used in the actual recover command, that I will get to shortly.  In my experience, the "REPAIR_ALLOW_DATA_LOSS" is usually given as the minimum recover level that will work.  

If it is acceptable to try the recovery, with the possible loss of, usually, a small amount of data if any, then the repair can be attempted.  If it is not, then the only option at that point would be to restore from a good backup.

To attempt the data recovery, the first thing to do is to back up your database as it is.  Then, the database needs to be set to single-user mode:

     ALTER DATABASE DatabaseName SET SINGLE_USER WITH ROLLBACK IMMEDIATE

Then, run the "DBCC CHECKTABLE" command, for each table that was reported as having corruption:

     DBCC checktable('TableName', REPAIR_ALLOW_DATA_LOSS)

When the table check finishes, it will report whether or not it was able to repair the table.  If repairs were successful, then the database needs to be put back into multiuser mode:

     ALTER DATABASE DatabaseName SET MULTI_USER

Your database is now functional again.  The only question remaining, is what, if any, data was lost.  For this, you will have to determine how and what to check for.