Recalculating Inventory in a Counterpoint Multi-site Environment

If you are using Counterpoint in a multi-site environment, care must be taken when running the Inventory Recalculate Quantities procedure. This procedure checks several of the quantity fields in the inventory tables against the data in the rest of the tables. Fields such as quantity-committed, quantity-on-po, and quantity-on-transfers, among others, are checked and corrected if necessary. Note that it will not adjust quantity-on-hand; that must be done via inventory adjustments.

The issue in a multi-site environment relates to how replication works. When a replication session starts with a remote site, it does not take a snapshot of the database. Rather, it goes through the list of tables and makes the appropriate updates as it processes each one. For inventory quantity recalculations, this means that when replication finishes, the data does not all represent the same point in time, due to ongoing user activity during the session.

To illustrate, let us look at the purchasing process and inventory quantities. The purchasing tables are processed by replication in the first half of the list of tables, while the inventory tables themselves are processed nearly at the end. This creates the possibility that a replication session starts and gets through the purchasing tables; then, while replication processes other tables, a user at the remote site posts a new purchase order. A bit later, replication processes the inventory tables. In this case, after the replication is finished, the hub site has no record of the purchase order that was posted at the remote site. However, it does have updated inventory records showing a quantity-on-po. If the Inventory Recalculate Quantities procedure is run at the hub at that point, it will set the quantity-on-po to zero for all items on the purchase order that was posted (assuming they are not on any other purchase orders). When replication runs again to the remote site, the purchase order will be transferred to the hub, and the quantity-on-po that was set to zero at the hub will be sent to the remote. Now both the remote and the hub have an open purchase order, while the quantity-on-po for every item on that purchase order is zero.

To avoid this timing problem, inventory quantities should only be recalculated after a replication with the remote sites during which no activity occurred at those sites. Typically, this means the recalculation must be run well after the remote site closes, or before they begin processing in the morning. Remember, it is not enough that the remote site has closed and everyone has gone home; a replication must occur after they have closed and gone home. Only then will the recalculation routine have data that truly reflects the state of the data at the remote site. Most of the time, this means the best time to run the recalculation is in the morning, before users at the remote site start doing anything with Counterpoint.

Dave.

Recovering From a Ransomware Attack

Ransomware attacks are on the rise. It is becoming more common to receive random emails with subjects indicating they are package tracking notices, voice mails, photo edits, and so on. Many of these are attempts to get you to take the bait, click the link, and ultimately install ransomware. Much has been written about recognizing malicious emails, not opening mail from unknown senders, and other good advice. What happens, however, if you are unfortunate and do get hit with ransomware?

We are assuming this is a true ransomware infection, where an active malware payload has been added to your system. There is also scareware masquerading as ransomware: it has no payload, but threatens you with data encryption just the same. It is best to assume, at first, that any ransomware threat includes a payload. Booting into safe mode and investigating can help you determine whether a payload is active. If the infection is just scareware, you may be safe with a reboot and a comprehensive malware scan to confirm that no infection is active.

First, be very suspicious of any unusual activity. One of the first signs, even before the ransom notice pops up, is that programs stop working or documents disappear. This activity is due to the malicious software starting to encrypt your files. If anything like that happens, take immediate action. First, disconnect your computer from your network: physically unplug the network cable, or if you use a wireless connection, turn it off. Also, immediately shut down your computer. I do not usually advocate just turning off the power, but this is one time that it is not a bad idea. The goal is to limit the damage if ransomware has started running on your system.

Try to start your computer in safe mode and begin investigating. Make sure you look up how to enter safe mode on your version of Windows, to MAKE SURE you do NOT get a normal boot, or the ransomware will be active again. Check for those programs or documents that suddenly disappeared. If there is a file with the same name but a changed extension, ransomware is most likely the culprit. In that case, be prepared to do some research, and possibly still lose some work. It depends on the ransomware variant involved; some payloads have been cracked, and recovery utilities are available for them.

Other ransomware payloads have no removal utilities, and you will have to go to your backup copies. Before that, however, you need to make sure the machine is cleaned of the ransomware programs, or your system will be reinfected and you will need to start over again. If utilities exist to clean the ransomware from your system, use them immediately. If not, a lot of digging and experimenting will be required, and if there is no cleaning utility, you may need to reformat the infected drives, reinstall the operating system, and then restore from a full image backup, NOT just a file backup. In either case, spend a lot of time checking your system before putting it back on your network and getting on with your work. You want to be very, very sure that the ransomware is gone, or you will be exposing the rest of the computers on your network to infection.

Recovering from ransomware is a critical task that can be very complex. This blog is just a simple overview. We recommend you contact CCS Retail Systems Support for further guidance and services to ensure the ransomware is properly eliminated from your systems. Remember that if you comply with the ransom demands, there is NO guarantee that your payment will result in any recovery of your system. The best course of action is to defeat the ransomware, NOT honor its demands.

Dave.

Automatically Applying Windows Updates on Servers.

It is important to apply updates to your Windows server. These updates help to keep your system secure and running at peak performance.

I do not recommend automatically applying these updates on your server, though. It is much better to set Windows updates to either “notify” or “download updates and notify”. The biggest problem with automatically installing updates is the reboot that may follow automatically. Even if updates are set to install sometime in the middle of the night, when you are off the system, the reboot may occur later, when you are on; I have seen this in cases where the updates took a long time to install. The reboot may also be delayed: if, for example, a user is left logged in on the console, the reboot may wait until that user logs off, which would typically happen when you are open and using your server. In addition, some updates are installed during shutdown or startup, and I have seen such updates take an hour, or more, to install. Your server is unavailable during that time.

In addition to inconvenient restarting of your server, occasionally an update may require additional attention. Although it is rare, there may be recommended setting changes, or other manual steps. It is best to know about these and be prepared to address them, instead of suddenly being confronted with a server that is not working as desired.

My recommendation, therefore, is to not automatically apply updates on your servers. Instead, make it a frequent task to check the updates that are released and determine which should be applied. Then, after checking for any possible issues with your setup or software, install those updates at a time when the server may be rebooted if needed. Finally, check your system after the updates are installed, and rebooted if needed, to make sure your system and software are operating properly.

While that is my recommendation for servers, it applies to a lesser degree to your workstations as well. For workstations, you should determine whether you can tolerate an unanticipated reboot, with some possible downtime. If resources permit, I recommend handling your workstations the same as your servers, and reserving updates for manual processing.

Dave.

Purging Distributions

As you use your Counterpoint SQL system, you will accumulate data. Sales history, purchasing, receiving: all of the history is accumulated, and these tables can grow to be quite large.

One table that can grow very fast is the distributions table. Everything that is posted creates distributions, and depending on the settings used, a single posting can create many records. In a short time, this can build into thousands, or millions, of records. Moreover, most people do not need most of the data in this table. If you interface to another accounting system, then once the distributions are sent to that system, there is really no need for them in Counterpoint. Many others do not use the accounting function at all, either within Counterpoint or interfaced to another accounting system. In that case, those records are never used and are just taking up space.
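
If you are curious how large the table has grown, a quick query of the system catalog will show it. Below is a minimal sketch that lists the largest tables in the database by row count; it assumes you are connected to your Counterpoint database and uses only standard SQL Server system views.

  -- List the largest tables by approximate row count.
  SELECT t.name AS table_name, SUM(p.rows) AS row_count
  FROM sys.tables t
  JOIN sys.partitions p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)   -- heap or clustered index only, to avoid double counting
  GROUP BY t.name
  ORDER BY row_count DESC;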

Fortunately, distributions are one of the areas where Counterpoint has a purge function. From the main menu, it is under System -> Accounting -> Utilities -> Purge Distributions. Under the event tab, select the Post Date range that you want to purge. You may have to use the customize function to find the post date option. Make sure that you select a range that will leave any records you may want. In a lot of cases, leaving 30 days of distributions is appropriate; however, that will vary depending on your use of these records and your needs. You may even be able to purge all records (for example, if you do not use any of the distribution information within Counterpoint).

Once you have selected the purge range and chosen the purge option, you will get a report of the records that will be purged. You will also be asked to confirm that you actually wish to purge the records. Once you confirm, the records will be removed. This can take several minutes, depending on how many records you are purging at a time. That is all there is to it.

Dave.

Getting ready to upgrade Counterpoint SQL

With many people getting ready to upgrade Counterpoint SQL to the latest version, I thought I would talk about what can be done to prepare. You need to confirm that your systems meet the prerequisites, and clear out any data that can be purged.

On the systems side, the biggest requirement is that SQL Server be version 2008 R2, at the minimum. Version 2012 is also supported, but later versions are not recommended. They may work without issues, but we are not recommending using versions above 2012 at this point. If you need to upgrade your SQL Server software, you can purchase a 2016 license, and install 2012 as a downgraded install, using that license. That way, you can upgrade to 2016 under the same license should it be necessary, or desirable, at some point in the future. That is the only issue that we have been seeing as far as system requirements for upgrading Counterpoint SQL.
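
A quick way to confirm which version of SQL Server is installed on each system is to run the following from any SQL query window:

  -- Report the SQL Server edition, version, and patch level.
  SELECT @@VERSION;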

As to the data, purging old data will speed up the upgrade process. In particular, purging the ticket history can make a big difference. Often, purging old ticket history is something that has not been done in a long time, if ever. With the many interconnected tables that make up the ticket history, the number of records that even a small store can generate over the years can be amazing. During the upgrade of Counterpoint SQL, the ticket history tables will be updated with new fields. On very large history tables, this can be very time consuming.

In order to lessen the impact, the upgrade process does not perform the actual ticket history table upgrades. Rather, the tables are moved aside, and empty tables with the changed schema are created. These tables are then populated from the data in the old tables by means of a scheduled job. This job migrates a number of records each time it runs. In most cases, it will be scheduled to run frequently and process a fairly small number of history records, starting with the latest tickets. Thus, over time the entire data set is migrated, and the records most needed for functions such as purchasing based on sales, or replenishment, are migrated first. Older tickets, which are usually not needed as much, are migrated later.

By purging the ticket history prior to upgrading, the migration process will obviously complete sooner. Every company is different in what they need for history, so some time should be spent determining how much history to preserve. Two to three years seems to be the most common, but some wish to retain more, and some less. Once the cutoff date is determined, the history should be purged up to that date. However, if you are using multi-site, I strongly recommend purging only one or two months of data at a time, and letting those changes replicate to all sites. If several months, or years, of data are purged at once, it can severely impact the replication and even bring it down.

Those are the main points to address in preparing to upgrade Counterpoint SQL. Once they are done, it is simply a matter of scheduling a time and doing the upgrade, which usually takes a few hours at each location.

Dave.

Repairing a table in MS-SQL

In my last blog, I discussed how to check a database in MS-SQL, using DBCC checkdb.  So, what happens if you check the database and find a corrupt table?  We will look at that today.

Looking at the output from the checkdb command will give you an idea of what needs to be done to repair a table.  The message will contain the minimum level of repair that is required.  This will be something like REPAIR_ALLOW_DATA_LOSS or REPAIR_REBUILD.  In my experience, it is almost always REPAIR_ALLOW_DATA_LOSS.  The first thing to do, of course, is to make a backup of your database.

Now, as to how to repair the table.  First, the database will have to be in single user mode.  Please note that if you are using Counterpoint SQL, you must stop the Counterpoint services first, or they will almost instantly re-attach, and there goes your single user connection.  Once the database is backed up and in single user mode, you can proceed to repair the table.
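
Putting the database into single user mode is done with an ALTER DATABASE statement.  Here is a minimal sketch, assuming a database named MyDatabase; the ROLLBACK IMMEDIATE option disconnects any remaining sessions.

  -- Take the database to single user mode, rolling back other sessions.
  ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;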

The basic command to do the repair is:

  • DBCC checktable('TABLE_NAME', REPAIR_ALLOW_DATA_LOSS)

This starts the repair routine, and it can take quite a while on a large table.  Once the repair ends, it will report what was, or was not, repaired.  It has been my experience that it is very rare that repairs cannot be made, and those cases have been where there were physical problems with the hard drive.

Now, assuming that checktable reported that it was able to repair the table, check for any missing data.  It has been my experience that actual loss of data is rare, but since the option used makes it possible, it needs to be checked.  How this is done varies.  It may be possible to check record counts or balances, depending on just what table was involved and what information you have that can be verified.

Once the data has been checked for data loss, put the database back into multi-user mode.  At that point, you are back up and running.
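
Returning to multi-user mode is the same statement in reverse, again assuming the database is named MyDatabase:

  -- Return the database to normal multi-user operation.
  ALTER DATABASE MyDatabase SET MULTI_USER;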

Checking Database Health

There may come a time that you get errors indicating corruption in your MS-SQL database.  Do not panic, as most of the time these are recoverable.

The place to start is the database check utility, and that is what I am going to discuss today.

The MS-SQL database check utility is DBCC.  There are several options that can be run via DBCC, but the place to start is with checkdb.  The checkdb function of DBCC checks several different areas of the database.  The MS-SQL documentation states that checkdb:

  • Runs DBCC CHECKALLOC on the database.
  • Runs DBCC CHECKTABLE on every table and view in the database.
  • Runs DBCC CHECKCATALOG on the database.
  • Validates the contents of every indexed view in the database.
  • Validates link-level consistency between table metadata and file system directories and files when storing varbinary(max) data in the file system using FILESTREAM.
  • Validates the Service Broker data in the database.

That is a pretty thorough checking of the database, and should uncover any issues with corruption, or database consistency.

The basic command is "DBCC checkdb('Database-name')".  So, if you have a database called "MyDatabase", running "DBCC checkdb('MyDatabase')" will do all of the checking mentioned above on your database.

I will warn you, however, that you will get a report on every table, index, and such, as to what checkdb finds.  Since we are concerned here with finding any problems, and not looking at a list of the possibly hundreds of tables that do not have any issues, let's tell DBCC checkdb not to report those.  This is done with the "WITH NO_INFOMSGS" clause.  So, the full command is now "DBCC checkdb('MyDatabase') WITH NO_INFOMSGS".  Now, what you will see is information for only the tables that have corruption or consistency issues.  Hopefully, you do not see anything reported, which means that no problems were detected.  If problems are detected, it will also report information as to what the problem is, and the minimum repair level that would be required.  Getting into doing those repairs will be the subject of a future blog.
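
If you have several databases on the server, it is handy to check them all in one pass.  Below is a minimal sketch that loops over every user database with a cursor; it assumes you have permission to run checkdb against each one.

  -- Run DBCC checkdb on every user database, suppressing informational messages.
  DECLARE @name sysname;
  DECLARE db_cursor CURSOR FOR
      SELECT name FROM sys.databases WHERE database_id > 4;  -- skip the system databases
  OPEN db_cursor;
  FETCH NEXT FROM db_cursor INTO @name;
  WHILE @@FETCH_STATUS = 0
  BEGIN
      DBCC CHECKDB(@name) WITH NO_INFOMSGS;
      FETCH NEXT FROM db_cursor INTO @name;
  END
  CLOSE db_cursor;
  DEALLOCATE db_cursor;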

SQL Maintenance on SQL Server Express

CCS recommends using the licensed versions of Microsoft SQL Server products to get the full feature set and the most current patch-level support.  In some cases, though, the SQL Server Express Edition (SSE) may be adequate for your needs.  As with most community or express versions of software, there are limitations, often a major feature like automatic task scheduling or a critical monitoring capability.  The limitation that matters here is that SSE cannot run the SQL Server Agent and schedule jobs.

To automate SQL maintenance with SSE, you use stored procedures combined with the Windows task scheduler.  I create stored procedures to do the maintenance tasks that I want to do, such as rebuilding indexes, updating statistics, and backing up the databases.  Personally, I prefer to do each task as a separate stored procedure; this avoids the monolithic procedure that is a nightmare to maintain.  These are the same stored procedures that I would use for a full SQL Server implementation, where I can use the SQL Server Agent to schedule jobs.  A sketch of one such procedure follows.
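
Here is a minimal sketch of one such maintenance procedure.  The name usp_RebuildIndexes is just an example, and a production version would check fragmentation levels before choosing between REBUILD and REORGANIZE.

  -- Rebuild every index on every user table in the current database.
  CREATE PROCEDURE dbo.usp_RebuildIndexes
  AS
  BEGIN
      SET NOCOUNT ON;
      DECLARE @sql nvarchar(max) = N'';
      SELECT @sql += N'ALTER INDEX ALL ON ' + QUOTENAME(s.name)
                   + N'.' + QUOTENAME(t.name) + N' REBUILD;'
      FROM sys.tables t
      JOIN sys.schemas s ON s.schema_id = t.schema_id;
      EXEC sp_executesql @sql;
  END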

At this point, for SSE, I make one additional stored procedure.  This one simply calls each of the maintenance procedures in turn.  Instead of creating a job with multiple steps, you simply call this procedure to perform all of the SQL maintenance tasks, as in the sketch below.
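
This is a minimal sketch of the wrapper; the procedure names are examples and would be whatever you named your individual maintenance procedures.

  -- Run all of the maintenance tasks in sequence.
  CREATE PROCEDURE dbo.usp_RunMaintenance
  AS
  BEGIN
      SET NOCOUNT ON;
      EXEC dbo.usp_RebuildIndexes;
      EXEC dbo.usp_UpdateStats;
      EXEC dbo.usp_BackupDatabases;
  END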

Now you need to be able to schedule the maintenance procedure to run automatically.  The key here is to install the SQL command line utility (sqlcmd).  This utility allows you to execute SQL commands from the Windows command line, so it is simply a matter of using sqlcmd to execute the stored procedure that calls all the maintenance tasks.

The final piece of the puzzle is to create a batch file (or PowerShell script) that runs the sqlcmd line.  Once the batch file is done, use the Windows Task Scheduler to schedule it to run at the desired time.  A minimal batch file might look like the sketch below.
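
This sketch assumes a local SSE instance named SQLEXPRESS, a database named MyDatabase, and the example wrapper procedure from above; adjust all three to your setup.  The -E switch uses Windows authentication.

  rem Run the SQL maintenance wrapper procedure via sqlcmd.
  sqlcmd -S .\SQLEXPRESS -E -d MyDatabase -Q "EXEC dbo.usp_RunMaintenance"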

Although this approach is easy in concept, there can be several syntactical gotchas and quirks of the task scheduler to work through, such as Windows and SQL security requirements.  If you wish to explore the SSE alternative, contact CCS support.

Dave.

SQL maintenance for Counterpoint SQL

In order to keep your Counterpoint SQL system running efficiently, there are a few maintenance tasks that need to be done, and they need to be performed on an ongoing basis.

First, backing up all databases should be a top priority.  Although this does not directly improve the performance of Counterpoint SQL, you will be very glad to have these backups in the rare case that your SQL database gets corrupted beyond repair.  It does not happen often, but it can happen, so it is much better to make database backups at least once a day.
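
A minimal backup sketch follows, assuming a database named MyDatabase and a backup folder of D:\Backups; WITH INIT overwrites the previous backup in the file.

  -- Full database backup to disk.
  BACKUP DATABASE MyDatabase
  TO DISK = 'D:\Backups\MyDatabase.bak'
  WITH INIT;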

As for maintenance that affects Counterpoint SQL performance, the two things that we have found to have the biggest impact are indexes and statistics.  To keep Counterpoint SQL performing well, rebuilding indexes and updating statistics needs to be done frequently; in fact, I recommend that it be done on a nightly basis.  Doing this frequently allows your SQL server to locate records in the most efficient manner.  Sometimes it is nothing short of amazing to see the difference in the speed of your lookups, just from doing these two tasks.
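
The core commands are simple; here is a minimal sketch, assuming a database named MyDatabase and a table named dbo.SomeTable.  A production script would loop over every table rather than naming one.

  -- Rebuild all indexes on one table, then refresh statistics database-wide.
  USE MyDatabase;
  ALTER INDEX ALL ON dbo.SomeTable REBUILD;  -- repeat per table, or loop over sys.tables
  EXEC sp_updatestats;                       -- update statistics for the whole database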

Something else that is often recommended is shrinking the database.  I do not recommend this unless a large amount of data has been purged and you are looking to recover the space.  If you do shrink your database, make sure that you allow ample empty space.  If you shrink it without allowing empty space, your SQL server will incur significant overhead, because it will frequently need to grow the database automatically in small increments.
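
If you do need to shrink after a large purge, the target percent argument to DBCC SHRINKDATABASE is how you leave that empty space.  The sketch below, assuming a database named MyDatabase, leaves 20 percent free space in the files.

  -- Shrink the database, leaving 20 percent free space for growth.
  DBCC SHRINKDATABASE (MyDatabase, 20);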

We have put together the SQL scripts to set up and schedule these tasks.  By scheduling them to run every night, your Counterpoint SQL performance will be kept at a high level.  Contact us to set them up on your system.

Dave.