While I don't have specific knowledge on this or access to Sage's benchmarks, I think this statement is likely only true for systems with a moderate transaction rate. It's pretty hard to beat MSSQL for intensive database operations where constant transactions are hammering the system. SQL Server also offers far more options to fine-tune for a given application, along with excellent backup and management tools, unlike ProvideX.

It's also hard to beat the ease of interfacing other applications with MAS, such as web access or using MAS data within MS Access or other database applications. The ProvideX ODBC system is a total disaster compared to SQL (see the rough sketch at the end of this post).

Lastly, total system scalability in SQL is vastly superior to ProvideX. Few installations will ever reach the limits, but ProvideX file size is capped at 256 GB, and to get there the files must be segmented into 2 GB sub-files with considerable extra system overhead, whereas SQL Server can handle Social Security Administration-sized files of up to 16 terabytes. I'd welcome hearing from anyone who has better, more specific information about this aspect, but again, few of us will ever reach that level of transaction volume.

In the case of this customer's issue, beyond the suggested memory upgrade, I'd also look seriously at the speed of the processors (hopefully multiple) and the RAID configuration used to optimize disk transfer. RAID 1 or 1+0 would be desirable to maximize read performance.

We have nothing like that number of users, but I don't consider our server, with dual quad-core Xeon processors at 3.4 GHz and 6 Gbit/s drives in RAID 1+0, to be overkill at all. There are no hiccups in this system even while backups (to another server) are in process. I'm also assuming that most systems today are configured with standard Gigabit infrastructure, with quality NICs and switches.
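
To make the ODBC point concrete, here's a rough sketch of pulling customer data from an outside application both ways, using Python with pyodbc purely as an illustration. The DSN name, company code, credentials, and the AR_Customer table and column names are assumptions based on a typical MAS 90/200 install; check your own data dictionary and driver setup before relying on any of them.

# Rough sketch only -- DSN, company code, table and column names are
# assumptions about a typical MAS 90/200 install, not a verified recipe.
import pyodbc

# ProvideX side: goes through the MAS ODBC driver (commonly a "SOTAMAS90"
# DSN), which is read-only and tends to crawl on large pulls.
pvx = pyodbc.connect("DSN=SOTAMAS90;UID=your_user;PWD=your_password;Company=ABC")
for row in pvx.execute("SELECT CustomerNo, CustomerName FROM AR_Customer"):
    print(row.CustomerNo, row.CustomerName)
pvx.close()

# SQL Server side (MAS 200 SQL): same query pattern, but the native driver,
# indexing, and query optimizer do the heavy lifting.
mssql = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=yourserver;"
    "DATABASE=MAS_ABC;Trusted_Connection=yes"
)
for row in mssql.execute("SELECT CustomerNo, CustomerName FROM AR_Customer"):
    print(row.CustomerNo, row.CustomerName)
mssql.close()

The point isn't the code itself, which is nearly identical in both cases, but how differently the two backends behave once the result sets get large or several users hit them at once.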