OLTP

Results 1 - 25 of 44
Published By: Hewlett Packard Enterprise     Published Date: Aug 02, 2017
In midsize and large organizations, critical business processing continues to depend on relational databases including Microsoft® SQL Server. While new tools like Hadoop help businesses analyze oceans of Big Data, conventional relational-database management systems (RDBMS) remain the backbone for online transaction processing (OLTP), online analytic processing (OLAP), and mixed OLTP/OLAP workloads.
Tags : 
database usage, database management, server usage, data protection
    
Hewlett Packard Enterprise
Published By: Hewlett Packard Enterprise     Published Date: Aug 02, 2017
What if you could reduce the cost of running Oracle databases and improve database performance at the same time? What would it mean to your enterprise and your IT operations? Oracle databases play a critical role in many enterprises. They’re the engines that drive critical online transaction (OLTP) and online analytical (OLAP) processing applications, the lifeblood of the business. These databases also create a unique challenge for IT leaders charged with improving productivity and driving new revenue opportunities while simultaneously reducing costs.
Tags : 
cost reduction, oracle database, it operation, online transaction, online analytics
    
Hewlett Packard Enterprise
Published By: Hewlett Packard Enterprise     Published Date: Mar 26, 2018
Modern storage arrays can’t compete on price without a range of data reduction technologies that help reduce the overall total cost of ownership of external storage. Unfortunately, no single data reduction technology fits all data types; savings come from both data deduplication and compression, depending on the workload. Typically, OLTP-type data (databases) works well with compression and can achieve between 2:1 and 3:1 reduction, depending on the data itself. Deduplication works well with large volumes of repeated data, such as virtual machines or virtual desktops, where many instances or images are based on a similar “gold” master.
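To make the deduplication claim concrete, here is a minimal Python sketch of block-level deduplication (illustrative only, not any vendor's implementation): identical blocks, such as VM images cloned from a "gold" master, are stored once and shared by reference.

```python
import hashlib

def dedupe(blocks: list[bytes]) -> tuple[dict, list]:
    store, refs = {}, []
    for b in blocks:
        digest = hashlib.sha256(b).hexdigest()
        store.setdefault(digest, b)   # keep one copy per unique block
        refs.append(digest)           # each logical block is just a pointer
    return store, refs

gold = [b"os-image"] * 8              # eight near-identical VM images
store, refs = dedupe(gold + [b"user-data"])
print(f"dedup ratio {len(refs)}:{len(store)}")   # 9:2, i.e. 4.5:1
```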
Tags : 
    
Hewlett Packard Enterprise
Published By: Oracle CX     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats, and run analytics on their operational data.
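To illustrate the row-versus-column trade-off the paper describes, here is a minimal Python sketch (toy data, not Oracle's implementation) of the same table held in both formats:

```python
# Row format: one record per transaction (good for OLTP point lookups/updates)
rows = [
    {"id": 1, "region": "EU", "amount": 120.0},
    {"id": 2, "region": "US", "amount": 75.5},
    {"id": 3, "region": "EU", "amount": 42.0},
]

# Column format: one contiguous array per attribute (good for analytic scans)
cols = {k: [r[k] for r in rows] for k in rows[0]}

# An OLTP-style access touches one whole row...
txn = rows[1]

# ...while an analytic aggregate scans only the columns it needs
eu_total = sum(amt for reg, amt in zip(cols["region"], cols["amount"])
               if reg == "EU")
print(eu_total)   # 162.0
```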
Tags : 
    
Oracle CX
Published By: Oracle CX     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it be a traditional row-formatted database to handle millions of transactions a day or a columnar database for advanced analytics to help uncover deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this leverages two separate sets of resources, with an extract, transform, load (ETL) process required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation. First, I/O bottlenecks can quickly arise because the databases reside on disk, and second, analysis is constantly being done on stale data. In-memory databases have helped address part of this problem.
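A minimal Python sketch of that traditional model (hypothetical table and function names), showing how a periodic ETL batch leaves the warehouse analyzing stale data:

```python
import datetime as dt

oltp_db = []       # stands in for the row-format OLTP database
warehouse = []     # stands in for the column-format data warehouse

def record_transaction(amount: float) -> None:
    """OLTP write path: commit a new transaction row."""
    oltp_db.append({"amount": amount, "ts": dt.datetime.now()})

def run_etl_batch() -> None:
    """Periodic ETL: copy everything committed so far into the warehouse."""
    warehouse.clear()
    warehouse.extend(oltp_db)

record_transaction(100.0)
run_etl_batch()
record_transaction(250.0)   # arrives after the last ETL run...
print(sum(r["amount"] for r in warehouse))   # 100.0 -- the analysis is stale
```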
Tags : 
    
Oracle CX
Published By: Dell and Nutanix     Published Date: Jan 16, 2018
Because many SQL Server implementations are running on virtual machines already, the use of a hyperconverged appliance is a logical choice. The Dell EMC XC Series with Nutanix software delivers high performance and low Opex for both OLTP and analytical database applications. For those moving from SQL Server 2005 to SQL Server 2016, this hyperconverged solution provides particularly significant benefits.
Tags : 
data, security, add capacity, infrastructure, networking, virtualization, dell
    
Dell and Nutanix
Published By: Dell EMC     Published Date: Nov 10, 2015
Read this paper to learn how Dell used its 12th-generation servers powered by Intel® Xeon® processors with direct-attached storage to demonstrate that a system with 43% flash and intelligent tiering can perform as well as 100% flash for OLTP databases using Microsoft SQL Server.
Tags : 
    
Dell EMC
Published By: Dell PC Lifecycle     Published Date: Mar 09, 2018
Compression algorithms reduce the number of bits needed to represent a set of data—the higher the compression ratio, the more space this particular data reduction technique saves. During our OLTP test, the Unity array achieved a compression ratio of 3.2-to-1 on the database volumes, whereas the 3PAR array averaged a 1.3-to-1 ratio. In our data mart loading test, the 3PAR achieved a ratio of 1.4-to-1 on the database volumes, whereas the Unity array achieved 1.3-to-1.
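To translate these ratios into space savings, the fraction of raw space saved by a ratio r is 1 - 1/r; a short Python sketch using the measured ratios above:

```python
# Fraction of raw space saved by a reduction ratio r is 1 - 1/r
for name, ratio in [("Unity, OLTP", 3.2), ("3PAR, OLTP", 1.3),
                    ("3PAR, data mart", 1.4), ("Unity, data mart", 1.3)]:
    print(f"{name}: {ratio}:1 -> {1 - 1/ratio:.0%} of raw space saved")
# Unity, OLTP: 3.2:1 -> 69% of raw space saved
# 3PAR, OLTP: 1.3:1 -> 23% of raw space saved
# ...
```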
Tags : 
    
Dell PC Lifecycle
Published By: Dell PC Lifecycle     Published Date: Mar 09, 2018
When your company’s work demands a new storage array, you have the opportunity to invest in a solution that can support demanding workloads simultaneously, such as online transaction processing (OLTP) and data mart loading. At Principled Technologies, we compared Dell EMC™ PowerEdge™ R930 servers with Intel® Xeon® processors and the Dell EMC Unity 400F All Flash storage array to HPE ProLiant DL580 Gen9 servers with the HPE 3PAR 8400 array in three hands-on tests to determine how well each solution could serve a company during these database-intensive tasks. Intel Inside®. New Possibilities Outside.
Tags : 
    
Dell PC Lifecycle
Published By: Dell PC Lifecycle     Published Date: Mar 09, 2018
Database operations are critical, both for their importance to your business and for their sheer scale. Our tests of the Dell EMC PowerEdge R930 server with the Unity 400F All Flash storage array show that this solution delivers performance comparable to an HPE ProLiant DL380 Gen9 server with a 3PAR array on OLTP workloads, with a higher compression ratio (3.2-to-1 versus 1.3-to-1). When loading large data sets, the Dell EMC Unity array was 22 percent faster than the HPE 3PAR array, which can make life easier for the administrator responsible for data marts. When running OLTP and data mart workloads simultaneously, the Unity array outperformed the HPE 3PAR array, processing 29 percent more orders per minute.
Tags : 
    
Dell PC Lifecycle
Published By: Dell PC Lifecycle     Published Date: Mar 09, 2018
Compression algorithms reduce the number of bits needed to represent a set of data; the higher the compression ratio, the more space this data reduction technique saves. In our OLTP test, the Unity array achieved a compression ratio of 3.2-to-1 on the database volumes, while the 3PAR array averaged 1.3-to-1. In the data mart loading test, the 3PAR array achieved 1.4-to-1 on the database volumes, while the Unity array recorded 1.3-to-1.
Tags : 
    
Dell PC Lifecycle
Published By: IBM     Published Date: Oct 13, 2016
Compare IBM DB2 pureScale with any other offering being considered for implementing a clustered, scalable database configuration. See how these clusters deliver continuous availability and why they are important. Download now!
Tags : 
data. queries, database operations, transactional databases, clustering, it management, storage, business technology, data storage
    
IBM
Published By: IBM     Published Date: Jul 05, 2016
This white paper discusses the concept of shared data scale-out clusters, as well as how they deliver continuous availability and why they are important for delivering scalable transaction processing support.
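As a rough illustration of why shared-data clusters support continuous availability, consider this Python sketch (hypothetical member names and routing logic, not DB2 pureScale's actual protocol): because every member serves the same shared database, a client can simply route around a failed member.

```python
members = ["member1", "member2", "member3"]
down = {"member1"}                  # simulate one member failing

def run_query(sql: str) -> str:
    for m in members:
        if m not in down:           # any live member can serve the request
            return f"{m} executed: {sql}"
    raise RuntimeError("no members available")

print(run_query("SELECT COUNT(*) FROM orders"))
# member2 executed: SELECT COUNT(*) FROM orders
```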
Tags : 
ibm, always on business, cloud, big data, oltp, ibm db2 purescale, networking, knowledge management, enterprise applications, data management, business technology, data center, data science, data storage, data security
    
IBM
Published By: NetApp     Published Date: Sep 24, 2013
"Today, IT’s customers are more mobile and global than ever before and as such expect their applications and data to be available 24x7. Interruptions, whether planned or unplanned, can have a major impact to the bottom line of the business. ESG Lab tested the ability of clustered Data ONTAP to provide continuous application availability and evaluated performance for both SAN and NAS configurations while running an Oracle OLTP workload. Check out this report to see the results."
Tags : 
mobile, global, applications, cloud, configuration, technology, knowledge management, storage, data center
    
NetApp
Published By: NetApp     Published Date: Dec 09, 2014
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response time cache for volumes that are provisioned on the Flash Pool aggregate.
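The hybrid SSD/HDD aggregate described here is essentially a tiered read cache. The following Python sketch is illustrative only (the class and capacity figures are invented, and this is not NetApp's actual caching logic): reads are served from the SSD tier on a hit and promoted from HDD on a miss.

```python
class HybridPool:
    """Toy hybrid aggregate: a small SSD read cache over an HDD pool."""
    def __init__(self, ssd_capacity: int):
        self.ssd = {}                  # fast tier: cached hot blocks
        self.ssd_capacity = ssd_capacity
        self.hdd = {}                  # capacity tier: backing store

    def write(self, block: int, data: bytes) -> None:
        self.hdd[block] = data         # writes land on the capacity tier

    def read(self, block: int) -> bytes:
        if block in self.ssd:          # hit: SSD-class response time
            return self.ssd[block]
        data = self.hdd[block]         # miss: HDD-class response time
        if len(self.ssd) < self.ssd_capacity:
            self.ssd[block] = data     # promote the hot block into cache
        return data

pool = HybridPool(ssd_capacity=2)
pool.write(7, b"hot block")
pool.read(7)   # first read misses and promotes block 7 to SSD
pool.read(7)   # subsequent reads hit the SSD cache
```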
Tags : 
netapp, hybrid, flash pool, ssd, hdd, iops, oltp, demartek, it management
    
NetApp
Published By: NetApp     Published Date: Feb 19, 2015
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response time cache for volumes that are provisioned on the Flash Pool aggregate. In this lab evaluation, NetApp commissioned Demartek to evaluate the effectiveness of Flash Pool with different types and numbers of hard disk drives using an online transaction processing (OLTP) database workload, and to evaluate the performance of Flash Pool in a clustered Data ONTAP environment during a cluster storage node failover scenario. In the report, you’ll discover how Demartek test engineers documented a 283% gain in IOPS and a reduction in latency by a factor of 66x after incorporating NetApp Flash Pool technology.
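To put those headline numbers in perspective, the arithmetic below assumes "283% gain" means 283% more IOPS than the HDD-only baseline and that the 66x factor divides latency; the baseline figures are hypothetical:

```python
# Hypothetical baseline; only the 283% and 66x figures come from the report
baseline_iops, baseline_latency_ms = 10_000, 33.0
iops_with_flash_pool = baseline_iops * (1 + 2.83)    # 283% more -> 38,300 IOPS
latency_with_flash_pool = baseline_latency_ms / 66   # 66x lower -> 0.5 ms
print(iops_with_flash_pool, latency_with_flash_pool)
```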
Tags : 
    
NetApp
Published By: NetApp     Published Date: Sep 22, 2014
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response time cache for volumes that are provisioned on the Flash Pool aggregate. In this lab evaluation, NetApp commissioned Demartek to evaluate the effectiveness of Flash Pool with different types and numbers of hard disk drives using an online transaction processing (OLTP) database workload, and to evaluate the performance of Flash Pool in a clustered Data ONTAP environment during a cluster storage node failover scenario. In the report, you’ll discover how Demartek test engineers documented a 283% gain in IOPS and a reduction in latency by a factor of 66x after incorporating NetApp Flash Pool technology.
Tags : 
flash pool, fas storage systems, ssd, online transaction processing, cluster storage
    
NetApp
Published By: Micron     Published Date: Jan 12, 2017
Micron’s 9100MAX delivers on the NVMe promise with 69% better throughput and transaction rates plus much lower latency in PostgreSQL OLTP. Download this technical marketing brief to learn more.
Tags : 
    
Micron
Published By: Micron     Published Date: Jan 12, 2017
See how Micron® NVMe SSDs and Microsoft® SQL Server reach impressive OLTP transaction rates while drastically minimizing latency and simplifying configuration. Download this technical marketing brief now.
Tags : 
    
Micron
Published By: IBM     Published Date: Jun 08, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
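As a quick worked example of the headline claim (the Oracle baseline figure below is hypothetical; only the 32 percent figure comes from the paper):

```python
# Hypothetical three-year Oracle 12c baseline; only the 32% is from the paper
oracle_3yr_cost = 1_000_000
db2_3yr_cost = oracle_3yr_cost * (1 - 0.32)
print(f"DB2 11.1: ${db2_3yr_cost:,.0f} vs Oracle 12c: ${oracle_3yr_cost:,.0f}")
# DB2 11.1: $680,000 vs Oracle 12c: $1,000,000
```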
Tags : 
ibm, linux, windows, telecommunications, healthcare, oracle database
    
IBM
Published By: IBM     Published Date: Jul 26, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
Tags : 
ibm, enterprise data, windows, linux, telecommunications, healthcare, consumer banking
    
IBM
Published By: IBM     Published Date: Sep 28, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
Tags : 
ibm, enterprise database, oltp, telecommunications, healthcare, consumer banking
    
IBM
Published By: Group M_IBM Q1'18     Published Date: Feb 28, 2018
This paper presents a cost/benefit case for IBM Db2 11.1 for LUW and Oracle Database 12c.
Tags : 
ibm, db2, oracle database, oltp deployments
    
Group M_IBM Q1'18
Published By: Oracle     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats, and run analytics on their operational data.
Tags : 
    
Oracle
Published By: Oracle     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it be a traditional row-formatted database to handle millions of transactions a day or a columnar database for advanced analytics to help uncover deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this leverages two separate sets of resources, with an extract, transform, load (ETL) process required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation. First, I/O bottlenecks can quickly arise because the databases reside on disk, and second, analysis is constantly being done on stale data. In-memory databases have helped address part of this problem.
Tags : 
    
Oracle