Schneider Electric Exchange Community

Geo SCADA Expert Forum
[Imported] Optimization

>>Message imported from previous forum - Category:General Items<<
User: andrew, originally posted: 2018-10-23 20:37:40 Id:181
Pretty generic, I know, but can anyone give me some pointers on optimizing my SCADA environment? I currently have a hot-standby setup with just over 12,000 points and 850 logic items. Is there anything I should look for that might be telling?
Thanks

Re: [Imported] Optimization

>>Responses imported from previous forum


Reply From User: andrew, posted: 2018-10-26 13:10:58
Actually there are 150 of the tank logic programs; they store information to a data grid and then, when finished, create a Crystal Report. I disabled the logic yesterday afternoon; attached is the current total time for all logic in service, as well as overruns. Another question I have is about calc points: we have 2,874 calculation points. Do these have the same effect as logic (overruns, execution times)?

Attached file: (editor/m3/y6buepjlc428.jpg), Capture.JPG File size: 787456


Reply From User: adamwoodland, posted: 2018-10-23 23:59:32
Hmm, yes, pretty open question really, but some pointers:

* Enable SBDAT logging and check the sync times (a sync occurs every five seconds); they should be very low, especially on a system your size, and certainly under a second assuming both servers are on a LAN
* Enable extended database lock diagnostics in Server Status - Database - Read/Write Lock Diagnostics and check for any write locks over half a second (ignoring the "DB Summary STD" entry if it appears, which is just the one-off database startup)
* Check Server Status - Historic - Historian and ensure the flush data is under control; it should ideally jump down to 0 each minute (due to update rates you won't actually see 0, but you should see the jump downwards)
* In ViewX check Queries - Logic Execution Status, and look for any logic program that takes a long time to read its inputs or execute. "Long" is subjective, but anything over 100 ms is worth checking out
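The last check can also be done with an ad-hoc SQL query in ViewX. This is only a sketch: the table and column names below are assumptions, not confirmed schema, so compare against the built-in Logic Execution Status query on your own server before relying on them:

```sql
-- Illustrative only: table and column names are assumed,
-- verify them against your Geo SCADA server's schema.
SELECT
    FullName,
    InService,
    ExecuteTime      -- time spent executing per scan
FROM
    CLogicProgram
WHERE
    InService = TRUE
ORDER BY
    ExecuteTime DESC
```

Whatever bubbles to the top of a list like this is the first thing to investigate.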

The key performance metric, however, is system usability, which is pretty difficult to measure objectively: can users get the data and show mimics in the time frame they require? For some users that's


Reply From User: JesseChamberlain, posted: 2018-10-24 00:17:42
And feel free to send a set of logs to your friendly support team.


Reply From User: du5tin, posted: 2018-10-24 05:15:27
Hey andrew,

What makes you think your setup might not be optimized? A system that size should run pretty well on most modern hardware (laptops, small servers, virtual machines).

The configuration defaults are pretty good for most scenarios.

+1 for JesseChamberlain and his suggestion.

If you have 850 logic programs I would be looking into the Logic status like adamwoodland suggests. This seems to be where most systems have issues. Server-side logic runs single-threaded and is very dependent on the database lock timing. The less server-side logic the better. If you have to run logic, run it on input change (where possible) instead of on a schedule. Avoid running queries or configuration functions from inside server-side logic. Never run larger logic programs as fast as the default 1-second interval.

When troubleshooting, look for overruns, long scan times, etc. Logic programs that call other logic programs can also cause issues. If you SUM the scan times in the query, that kind of gives a 'worst case scenario'. If it's more than 45 seconds you might have issues with the logic thread not being able to keep up.
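The SUM suggestion could be sketched like this (again, the table and column names are assumptions; adjust them to whatever the Logic Execution Status query actually exposes on your system):

```sql
-- Worst-case total: how long every in-service program would take
-- if all of them ran back to back. Names are illustrative assumptions.
SELECT SUM( ScanTime ) FROM CLogicProgram WHERE InService = TRUE
```

If that total approaches the 45 seconds mentioned above, the single logic thread has very little headroom left.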

We set up a 250,000 point system this spring. Zero logic programs, very strict and consistent database structure. Runs like a dream, easily the most stable system I have ever worked on. We had it running with no issues on a VM with 12 GB RAM and 4 vCPUs. We bumped it up to a better spec (24 GB RAM and 8 vCPUs) when we went to production but we could hardly tell the difference in performance. Storage was some crazy SSD vSAN storage so disk access was quick; I think this makes an incredible difference.

One thing I've noticed in the last 6-8 months is ClearSCADA seems to really like a fast CPU. If you can run it on a higher clock speed (3+GHz vs 1.8-2.4GHz) you will notice a good bump in performance and snappiness.


Reply From User: tfranklin, posted: 2018-10-24 13:28:01
[at]du5tin said: (full reply quoted above)

Curious -- on the no-logic system, was this grassroots or pre-existing? A common scenario we run into is people feeling the need to do complex calculations, bitwise evaluations, or things a PLC/RTU should be doing within ST logic. I assume no logic means you're using calculation points where applicable?


Reply From User: du5tin, posted: 2018-10-24 13:45:51
It was a rebuild of an existing CS system that had thousands of logic programs and it ran terribly prior to being rebuilt.

We used calculation points for any math calculations and other integer-to-string enumerations that had to be done at the host.
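For reference, an integer-to-string enumeration of that sort can be written directly into a calculation point's expression. This is only an illustrative sketch: the point name is a placeholder and the exact expression syntax (quoting, CASE support) should be checked against your Geo SCADA version:

```sql
-- Placeholder point name; verify expression syntax for your version.
CASE "Pump 1 Status.CurrentValue"
    WHEN 0 THEN 'Stopped'
    WHEN 1 THEN 'Running'
    ELSE 'Fault'
END
```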


Reply From User: andrew, posted: 2018-10-24 13:47:29
What is SBDAT?
I ask about optimization because I set up a server state calc point
(IIF( "..Standby0.Sync" = True, 1, 0 )) that alarms several times an hour.
Both servers are currently VMs with 12 GB and 2 cores at 2.53 GHz.


Attached file: (editor/lq/wls7bto4kn4x.jpg), Locks.JPG File size: 359246

Attached file: (editor/kx/oi7hs9yuxab1.jpg), Overruns.JPG File size: 560862


Reply From User: du5tin, posted: 2018-10-24 14:20:49
You can enable logging options in Server Status under General-Logging. SBDAT is the logging option for Standby transfer data.

Are you also noticing the server icon on the standby changing color or changing state? If your standby is having trouble staying in sync, it might be because of the logic programs you have running (guessing, because of the high write lock time and the overruns/execution duration of the logic programs). We have had this in the past on a couple of systems, and the problem presents as the standby being bumped momentarily and then coming back into sync. Changing the database config to remove the need for logic (using DataSets/DataRows instead of DataGrids) usually gets things running properly again.

You should probably get a hold of tech support and they can walk you through finding the root causes for the performance issues and suggest some improvements.


Reply From User: adamwoodland, posted: 2018-10-24 23:36:35
Those logic write times will be impacting performance as they need a database lock. Have a quick read of http://resourcecentre.controlmicrosystems.com/display/CS/Logic+Execution

Basically when outputs from logic are written to the database it requires a write lock, and this blocks virtually all other system access.

Depending on how often those ~20 tank logic programs run, and if they are all running at the same time, you could be looking at a database write lock of 1 minute.

If they run every 10 mins for example, that's your system frozen for at least 10% of its time.

What is that logic doing?

An execution time that high suggests a serious loop (or loops within loops). A write duration that high suggests lots of rows being written, maybe to a data table.

Is there any other logic with a high read or write duration? If you sort by total duration, post the top 30 or so.


Reply From User: andrew, posted: 2018-10-30 23:51:01
Okay, I disabled the 'report logic' and life seems so much simpler now; no more overruns. I still have the question about the difference between calc points and logic, but also a query question: how do I get a column that increments by one every row? Thanks


Reply From User: andrewscott, posted: 2018-11-27 10:07:41
Storing information in a data grid is not advisable, as every update will be reconfiguring the data grid object. **Data tables** are specifically designed for storing information, whereas **data grids** (and **data sets**) are designed to collate existing information so that it can be queried easily (e.g. from a Crystal Report).