We're looking to deploy a Virtual ViewX solution for a customer to support 25 concurrent client connections. The Geo SCADA installation doco recommends (for 20 clients) a host with CPU PassMark 15000, 2D graphics PassMark 300, 10 cores, 32GB RAM, and a 1Gb network. I'm interested in whether anyone has real-life experience deploying a VVX server for this number of clients and can offer any insight or pitfalls to watch out for when spec'ing the server. What have you used for the host - physical or virtual? If virtual, did you use dedicated graphics resources, and how successful was this in practice? I'm really just wanting to draw on others' experience to avoid known issues and propose the best solution for the customer, as I haven't personally deployed the VVX solution before... Thanks!
I haven't tried as high as 25; my max so far is 10 clients on 8 cores/16GB of RAM. No complaints have made it my way yet, but I'm not the active implementer or user of the system.
Some of the risk might be managed by putting a load balancer (e.g. a cloud load balancer, a web application firewall, or even just a small nginx instance on Linux) in front of multiple VVX servers to spread the load. I haven't tried this, so there may be gotchas, but on paper it should work. Note that VVX uses WebSockets, and that might cause problems - it did with the OWASP rules when I tried them in a WAF.
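For anyone exploring the nginx route, a minimal sketch of what that reverse-proxy config might look like is below. This is untested against VVX specifically; the host names, port, and timeout values are placeholders for illustration. The WebSocket-related directives (`proxy_http_version 1.1` plus the `Upgrade`/`Connection` headers) are standard nginx requirements for proxying WebSocket traffic:

```nginx
# Sketch only: two hypothetical VVX hosts behind nginx with sticky sessions.
# Adjust host names, port, and TLS settings for your environment.
upstream vvx_backend {
    ip_hash;                          # pin each client IP to one VVX server
    server vvx1.example.local:443;
    server vvx2.example.local:443;
}

server {
    listen 443 ssl;
    server_name scada.example.local;
    # ssl_certificate / ssl_certificate_key directives go here

    location / {
        proxy_pass https://vvx_backend;
        proxy_http_version 1.1;                  # needed for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;  # pass the upgrade request through
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600s;                # keep long-lived sockets open
    }
}
```

Sticky sessions (`ip_hash` here) would likely matter, since a VVX client session presumably needs to keep talking to the same back-end server.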
Yesterday I saw 50 (yes, 50!) VVX web clients log in to an Azure F16v2 server - 16-core CPU with SSD and 32GB RAM.
Performance was tight, and complex mimics stressed the server a lot, but we had each client up with an alarm banner and changing to a new mimic every 5 seconds. An F32 server with more cores would handle it better.
Note: to get this up and running with this number of clients, you must have a VVX server separate from Geo SCADA, and change the session settings. Go to the VVX Manager applet, pick the Sessions tab, and at the bottom choose "Multiple browser per session" with a session count of 5. (All 50 users are still kept separate; this just allocates server resources to enable this sort of scalability.)
Thanks Steve - that is reassuring to hear!
That is great to hear.
Was this with the default ViewX logging still enabled?
In some recent testing we've found the ViewX logging to be quite the bottleneck in general ViewX performance.
It would be great if you had the opportunity to disable logging on this server to just compare the performance.
The NVMe RAID SSDs that the F16v2 (and F32) use should obviously mitigate many of the raw disk read/write performance issues. But I'm running an NVMe SSD setup on my computer as well, and even there the disk subsystem overhead (i.e. all the code executed to get a byte from the application to the disk) seemed to have a significant negative impact on performance.