How Fast is SciChart’s WPF Chart? DirectX vs. Software Comparison
Posted by Andrew on 30 December 2014 07:25 PM
Test results demonstrate that SciChart’s Direct3D10RenderSurface, available in the SciChart Enterprise and SDK Editions, is up to 18x faster than software rendering, with a median speed increase of 4x. Across a variety of tests and chart types, SciChart’s DirectX renderer plugin excelled in performance versus the software renderers!
Measuring WPF Chart Performance
WPF Chart Performance is really important to us at SciChart. We strive to build the best and fastest WPF Chart components in the world. But how do we demonstrate what we’ve achieved versus our competitors to would-be customers?
We’ve created an application which measures our WPF Chart Performance in a number of scenarios, such as line, scatter, scrolling-line and many-series scenarios. The tests are designed to really stress the chart control and find its limits.
There are several factors which influence the overall performance of a WPF Chart control. These are reflected in the test cases, each of which stresses a different area of the renderer.
The Test Setup
Test Setup Configuration
As you may know, SciChart ships with several RenderSurface plugins including the HighSpeedRenderSurface, HighQualityRenderSurface, and now, the Direct3D10RenderSurface, a DirectX10/11 hardware-accelerated RenderSurface implementation.
We’ve tested the latest release of SciChart (v4) and compared the three renderer plugins across a number of different scenarios in order to highlight the difference in performance between the SciChart RenderSurface implementations.
What Tests are Run
A number of tests are run in our WPF Chart Performance Comparison app which stress different areas of the renderer. These tests are designed to really stress the chart, by having huge numbers of series, or points, or many updates per second.
For each test, the FPS (refresh rate) is measured via the SciChartSurface.Rendered event, which fires once after each drawing pass completes.
The test cases are as follows:
Test 1: NxM Series Test
N series of M points are appended to an XyDataSeries, then the chart redraws (the same data) as fast as possible for 20 seconds per test. FPS is measured using the time elapsed between subsequent SciChartSurface.Rendered events.
Areas Stressed: Iterating Data Series, Coordinate Transformation and Drawing.
NOTE: Resampling is not applicable for short data-series (below a few thousand points) so this tests iterating over many data-series, coordinate transformation as well as raw drawing speed.
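The FPS measurement above can be sketched in a few lines. This is an illustrative Python snippet (SciChart itself is a C#/WPF library, so the function name and shape here are assumptions, not SciChart’s API): given timestamps of successive render events, the average FPS is the number of inter-frame gaps divided by the total elapsed time.

```python
# Illustrative sketch: derive average FPS from the timestamps of
# successive render events (analogous to SciChartSurface.Rendered).
def average_fps(render_timestamps):
    """Average frames per second from a list of event times (in seconds)."""
    if len(render_timestamps) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(render_timestamps, render_timestamps[1:])]
    return len(deltas) / sum(deltas)

# Five renders spaced 20 ms apart correspond to 50 FPS.
print(average_fps([0.00, 0.02, 0.04, 0.06, 0.08]))
```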
Test 2: Scatter Series Test
N Scatter points are appended to an XyDataSeries, then the chart redraws. Immediately after, the points are updated in a Brownian-motion fashion and the chart is drawn again. The FPS is measured using the time elapsed between subsequent SciChartSurface.Rendered events.
Areas Stressed: Coordinate Transformation, Geometry Generation (Ellipse) and Drawing.
NOTE: Resampling is not applicable for Scatter Series as our standard resampling algorithms do not work with scatter data, so this test effectively tests the geometry generation and drawing speed.
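The Brownian-motion update in this test can be sketched as follows. This is a hedged Python illustration of the idea only (the function name and sigma parameter are assumptions, not part of SciChart): each point is perturbed by a small Gaussian step between frames.

```python
import random

# Illustrative sketch: update scatter points in a Brownian-motion
# fashion between redraws, as described for the scatter test.
def brownian_step(ys, sigma=0.01):
    """Return a new list where each value is perturbed by Gaussian noise."""
    return [y + random.gauss(0.0, sigma) for y in ys]

ys = [0.5] * 1000
for _ in range(10):      # ten simulated redraws
    ys = brownian_step(ys)
```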
Test 3: FIFO Series Test
N points are appended to a FIFO (circular buffer) series, then a single point is appended (and one dropped), which triggers a redraw of the chart. The FPS is measured using the time elapsed between subsequent SciChartSurface.Rendered events.
Areas Stressed: Copying Circular Buffers (FIFO Series), Resampling and Drawing.
NOTE: Because the data is random (0-1), lots of pixels are drawn. This test places a heavier emphasis on drawing than on resampling, although at higher point counts resampling starts to kick in.
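The FIFO behaviour described above is that of a fixed-capacity circular buffer. A minimal Python sketch of the concept (not SciChart’s implementation) using the standard-library `deque`:

```python
from collections import deque

# Illustrative sketch: a FIFO series as a fixed-capacity circular
# buffer -- appending when full drops the oldest point.
fifo = deque(maxlen=5)
for x in range(5):
    fifo.append(x)       # buffer now holds 0..4
fifo.append(5)           # oldest point (0) is dropped
print(list(fifo))        # [1, 2, 3, 4, 5]
```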
Test 4: Append Data Test
N/3 points are appended to 3 DataSeries, then M points are appended between each draw of the chart. The data is a random walk, but we vary the noise to create more, or less, noisy waves. This has the effect of stressing drawing more (when more noisy) versus resampling more (when less noisy).
Areas stressed: Appending Data, Resampling, Auto-Ranging and Drawing.
NOTE: As the noise factor is increased, more pixels must be drawn, which stresses the rendering (drawing) more. For lower noise, resampling is the dominant influence on performance.
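The random-walk-with-noise data described in this test can be sketched as below. This is an illustrative Python snippet under assumed parameters (the `noise` factor and step range are hypothetical, chosen only to show the smooth-versus-jagged trade-off):

```python
import random

# Illustrative sketch: a random walk whose "noise" factor controls
# how jagged (and therefore how pixel-heavy to draw) the wave is.
def random_walk(n, noise=1.0):
    ys, y = [], 0.0
    for _ in range(n):
        y += random.uniform(-0.5, 0.5) * noise
        ys.append(y)
    return ys

smooth = random_walk(1000, noise=0.1)   # stresses resampling more
jagged = random_walk(1000, noise=5.0)   # stresses drawing more
```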
Test Setup Hardware
For SciChart’s WPF Chart, CPU and RAM speed are the biggest influences on overall performance. For the DirectX renderer, the GPU becomes more significant. Please note that we get decent numbers even on a dual-core laptop. You don’t need a powerful machine to get good results, but it helps (especially for DirectX).
Test Results
We’ve included the test results below:
In Table Form
In Chart Form
No WPF Charting Performance Test would be complete without some results in chart form! So, we’ve included the above results as a set of bar charts.
Site-Wide Search, powered by OneSearch™ now available
Posted by Andrew on 04 November 2014 10:40 AM
SciChart now features an awesome new site-wide search, powered by OneSearch™!
OneSearch™ uses the open-source Elasticsearch engine and proprietary web-crawlers. OneSearch™ is targeted as a private multi-site intranet or internet search engine for situations where you need high-quality search results across multiple sites, delivered by a single, unified search-and-suggestions API.
We are now able to crawl multiple documentation sources to provide one search engine. The front-end is powered by Ajax and provides autocomplete suggestions as the user types.
We have set it up on the SciChart website to crawl several web sources.
Note you can also access the site-wide search via the lightweight front end at http://search.scichart.com.
Give it a try and let us know if it helped you find what you were looking for!
Manifesto for Agile Software Development
Posted by Andrew on 27 August 2014 02:25 PM
I was reading a great code-rant on Agile Software Development, the sensationally titled Why Agile has failed! and stumbled upon the Agile Software Development manifesto. What is Agile? Well, the above article demonstrates what it is not. It’s not this:
Curiously, I’d never read the Agile Software Development manifesto before, but here it is. It’s four lines:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Are you doing this in your organisation? It’s certainly something we are striving to do! Sometimes successfully, sometimes unsuccessfully. Hopefully, we are getting better as time goes on.
How do we ‘do Agile’?
A few months ago we wrote an article on How we Handle Support & the Roadmap. In it we said:
When I re-read this, it’s closer to the Agile Software Development Manifesto (and further from Dilbert-Agile) than I first realised!
How we implement …
Individuals and Interactions
We promote direct interaction between customer and developers. In our teams we strive to communicate with each other, and share knowledge rapidly to best solve a problem.
Working Software over Comprehensive Documentation
We prefer to document by example, and iterate fast on features and improvements. By delivering fixes to you via a NuGet Nightly Build, you can sometimes get the features and bug fixes you need within 24 hours! Guess what though, we don’t have a 250-page PDF user manual, although we do have a living, growing KnowledgeBase.
Did you know some of our customers submit fixes or improvements to the source which we include in the build? We work closely with customers to ensure they get what they want and we collaborate directly with you to create SciChart.
Responding to Change
Sometimes a feature has to be backed out, sometimes a deadline has to move. Sometimes it just can’t be done. Sometimes, it all needs to be dropped, and something else needs to be done instead! We are not rigid. You have to have a plan, sure, but you have to be ready to tear it up too.
So, hopefully, we are more agile than we first thought. Sure, we have a few tools and processes too. We use JetBrains YouTrack, TeamCity and ProGet to automate continuous delivery, and we have tech-support Dave. What else do you need to create a successful software business?!