Schneider Electric Exchange Community


Geo SCADA Expert Forum

[import] Trend (Fetching Data)

>>Message imported from previous forum - Category:ClearSCADA Software<<
User: amanshow, originally posted: 2019-05-07 02:08:11 Id:424
Good day!

I have several trends grouped by process, power-meter parameters, and VFD parameters. In total, my system has around 240 tags with historic trending. When I open them on a display through a button, it usually takes several minutes to fetch the data (I'm using the DNP3 protocol, by the way). It takes around 3-5 minutes, and it bothers my clients. I usually tell them it's probably because ClearSCADA is communicating with the SCADAPack for the data, but is there any other way to make the trends load or fetch faster?

I would appreciate advice or responses. Have a nice day!

Reply User: dmercer, posted: 2019-05-07 02:53:49
You may have too many historic values for some of the tags, given the period you're trying to display and your system's speed. It shouldn't need to fetch new data from the RTU each time to display a historic trend.
If you can let me know how many historic values are being stored per tag per unit time, and what period the trends cover by default, I should be able to tell you more.
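To see why the historic record count matters, here is a rough sizing sketch. The logging interval and trend window below are illustrative assumptions, not values from the original post; substitute your own.

```python
# Rough sizing: how many historic records a trend must read.
# The 10 s logging interval and 24 h window are assumptions for
# illustration -- plug in your own figures.

def records_to_fetch(tags, interval_seconds, window_hours):
    """Total historic records a trend page must read for `tags` pens."""
    samples_per_tag = window_hours * 3600 // interval_seconds
    return tags * samples_per_tag

# e.g. 240 tags logged every 10 s, displayed over a 24 h window:
total = records_to_fetch(240, 10, 24)
print(total)  # 2073600 records -- enough to explain a multi-minute fetch
```

If the result runs into the millions, the slow load is explained by data volume alone, before communications are even considered.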

Reply User: GregYYC, posted: 2019-05-09 13:43:08
SCADA servers are designed for polling, and they are not great historians (a generalized statement, obviously). The issue you are likely having is that your database is too busy inserting new data (which puts locks on tables/rows) to perform the fetch (which is a lower priority). To get around this, you have a couple of options.

1. Look at purchasing a "performance server" for client interaction. This would allow you to have a dedicated server to perform fetches and a dedicated server to perform polling.
2. You could look at offloading the data to a proper historian (Wonderware, PI, Azure).
3. It comes down to I/O, and your hard drives will be the limiting factor. Consider upgrading to SSDs or otherwise increasing random I/O read/write capacity.

Cost vs value.... 3, 1, 2
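A back-of-envelope calculation shows why option 3 (faster disks) tends to dominate. The IOPS figures and read count below are typical rule-of-thumb assumptions, not measurements from this system.

```python
# Why disk I/O is usually the limiting factor for trend fetches.
# Assumed figures: a 7200 rpm HDD sustains roughly 100 random IOPS,
# a SATA SSD roughly 50,000. The 20,000-read trend load is hypothetical.

def fetch_seconds(random_reads, iops):
    """Time to service `random_reads` random I/O operations at `iops`."""
    return random_reads / iops

reads = 20_000                        # hypothetical random reads per trend load
hdd = fetch_seconds(reads, 100)       # 200.0 s -- minutes, as the OP reports
ssd = fetch_seconds(reads, 50_000)    # 0.4 s
print(hdd, ssd)
```

The same fetch that takes minutes on a single spinning disk can drop to under a second on an SSD, which is why the disk upgrade is the cheapest win on the cost-vs-value list.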

Reply User: du5tin, posted: 2019-05-09 19:55:49
+1 for GregYYC's recommendations

240 tags and 3-5 minutes to load a trend? You must have a very large amount of data, or a long time span.

Moving the database to a fast disk (like an SSD) would be one of my first steps. All our systems are on fast RAID, SSDs, fast SANs, or SANs with RAM cache, and we can usually load a lot of data fairly quickly. We did some work on a demo system running a single 7200 rpm HDD a few weeks ago and it was _painful_! Anecdotally, we have often seen our developer laptops with a single SSD load trends faster than server hardware with spinning disks and fast RAID arrays. The rest of the system does play a factor, though.

You could try increasing the RAM cache size for the Raw Historic (accessed via the server configuration option from the server icon). The initial trend load won't be much faster, but subsequent trend loads should be. Newer versions default to a higher cache of 256 MB, I think; older versions started at 50 MB. Mileage may vary... if your machine doesn't have the RAM available, this will reduce performance. Check with tech support if you are not sure what to set it to.
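A quick sanity check on whether a trend window even fits in the cache. The bytes-per-record figure and logging interval below are assumptions for illustration; check the actual raw-historic record size for your Geo SCADA version before relying on this.

```python
# Rough check: would one trend window fit in the raw-historic RAM cache?
# BYTES_PER_RECORD is an assumed figure, not the documented on-disk size.

BYTES_PER_RECORD = 24  # assumption -- verify against your version's docs

def cache_mb_needed(tags, interval_seconds, window_hours):
    """Approximate MB of cache to hold one trend window for `tags` pens."""
    records = tags * (window_hours * 3600 // interval_seconds)
    return records * BYTES_PER_RECORD / (1024 * 1024)

# 240 tags at an assumed 10 s logging interval over a 24 h window:
print(round(cache_mb_needed(240, 10, 24), 1))  # 47.5 MB -- fits in 256 MB
```

If the answer comfortably fits the configured cache, repeat trend loads should come from RAM rather than disk.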

Processed trends seem to load quicker than raw trends for long time spans. Raw is good for time spans of less than a day; processed is far better for time spans of a week or longer (IMO). Maybe tweaking the pen type will help.
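The reason a processed pen is faster over long spans is that the server returns one aggregate per interval instead of every raw sample. A minimal sketch of that kind of aggregation (the real Geo SCADA processing happens server-side; this is just the idea):

```python
# Sketch of interval aggregation: collapse raw samples into one
# (min, max, mean) tuple per fixed-size bucket, the way a processed
# pen returns far fewer points than a raw pen over the same span.

def process(samples, bucket):
    """Aggregate `samples` into (min, max, mean) per `bucket` samples."""
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.append((min(chunk), max(chunk), sum(chunk) / len(chunk)))
    return out

raw = list(range(86400))      # one day of 1 s samples
hourly = process(raw, 3600)   # 24 aggregates instead of 86,400 points
print(len(raw), len(hourly))
```

Over a week-long span the reduction is three to four orders of magnitude, which is why the processed pen type feels so much snappier.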

If this is a resource-constrained system with lots of users and no video card, consider turning off the gradual shading animations. This is the animation that happens when you select a pen and the other pens fade out. I cannot remember exactly where that option is, but if the machine has to do less graphical work, it sometimes feels more responsive.