OLTP

Results 1 - 25 of 32
Published By: Oracle Hardware     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats and run analytics on their operational data. The row-versus-column distinction is sketched in the toy example after this entry.
Tags : 
    
Oracle Hardware
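To make the row-versus-column trade-off concrete, here is a minimal Python sketch of the same table held in both layouts. This is a toy model with hypothetical names, not Oracle's Database In-Memory implementation: the row layout answers OLTP-style record lookups, while the column layout answers analytic aggregates by touching only the attribute it needs.

# Toy table in row format: one dict per record, good for point lookups/updates.
rows = [
    {"order_id": 1, "customer": "acme", "amount": 120.0},
    {"order_id": 2, "customer": "zenith", "amount": 75.5},
    {"order_id": 3, "customer": "acme", "amount": 310.0},
]

# The same table in column format: one contiguous list per attribute,
# good for scans and aggregates over a single column.
columns = {
    "order_id": [r["order_id"] for r in rows],
    "customer": [r["customer"] for r in rows],
    "amount": [r["amount"] for r in rows],
}

def get_order(order_id):
    # OLTP-style access: fetch one whole record, natural in the row layout.
    return next(r for r in rows if r["order_id"] == order_id)

def total_amount():
    # Analytic-style access: aggregate one attribute across all records,
    # reading only the "amount" column rather than every full row.
    return sum(columns["amount"])

print(get_order(2))    # {'order_id': 2, 'customer': 'zenith', 'amount': 75.5}
print(total_amount())  # 505.5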
Published By: Oracle Hardware     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it is a traditional row-formatted database handling millions of transactions a day or a columnar database for advanced analytics that uncovers deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this uses two separate sets of resources, with an ETL process required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation: first, I/O bottlenecks can quickly arise because the databases reside on disk; second, analysis is constantly being done on stale data. In-memory databases have helped address part of this problem. The staleness built into the batch-ETL model is sketched in the toy example after this entry.
Tags : 
    
Oracle Hardware
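The "stale data" problem in that two-copy architecture shows up even in a toy Python model. This is a sketch of the pattern described above with hypothetical names, not any vendor's ETL tooling: analytics sees only what the last batch ETL run copied into the warehouse.

# Live transactions land in the OLTP store; analytics reads a separate copy.
oltp_store = []   # operational row store receiving writes
warehouse = []    # analytic copy, refreshed only by batch ETL runs

def record_sale(amount):
    oltp_store.append({"amount": amount})

def run_etl():
    # Periodic batch copy: anything written after this call is invisible
    # to analytics until the next run.
    warehouse.clear()
    warehouse.extend(oltp_store)

def analytics_total():
    return sum(r["amount"] for r in warehouse)

record_sale(100)
record_sale(200)
run_etl()
record_sale(300)          # arrives after the last ETL batch
print(analytics_total())  # 300, not 600: the report runs on stale data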
Published By: Hewlett Packard Enterprise     Published Date: Aug 02, 2017
In midsize and large organizations, critical business processing continues to depend on relational databases including Microsoft® SQL Server. While new tools like Hadoop help businesses analyze oceans of Big Data, conventional relational-database management systems (RDBMS) remain the backbone for online transaction processing (OLTP), online analytic processing (OLAP), and mixed OLTP/OLAP workloads.
Tags : 
database usage, database management, server usage, data protection
    
Hewlett Packard Enterprise
Published By: Hewlett Packard Enterprise     Published Date: Aug 02, 2017
What if you could reduce the cost of running Oracle databases and improve database performance at the same time? What would it mean to your enterprise and your IT operations? Oracle databases play a critical role in many enterprises. They’re the engines that drive critical online transaction (OLTP) and online analytical (OLAP) processing applications, the lifeblood of the business. These databases also create a unique challenge for IT leaders charged with improving productivity and driving new revenue opportunities while simultaneously reducing costs.
Tags : 
cost reduction, oracle database, it operation, online transaction, online analytics
    
Hewlett Packard Enterprise
Published By: Oracle CX     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats and run analytics on their operational data.
Tags : 
    
Oracle CX
Published By: Oracle CX     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it is a traditional row-formatted database handling millions of transactions a day or a columnar database for advanced analytics that uncovers deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this uses two separate sets of resources, with an ETL process required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation: first, I/O bottlenecks can quickly arise because the databases reside on disk; second, analysis is constantly being done on stale data. In-memory databases have helped address part of this problem.
Tags : 
    
Oracle CX
Published By: Dell EMC     Published Date: Nov 10, 2015
Read this paper to learn how Dell has used its Generation 12 servers powered by Intel® Xeon® processors with direct attached storage to demonstrate that a system with 43% flash and intelligent tiering can perform as well as 100% flash for OLTP databases using Microsoft SQL Server.
Tags : 
    
Dell EMC
Published By: IBM     Published Date: Oct 13, 2016
Compare IBM DB2 pureScale with any other offering being considered for implementing a clustered, scalable database configuration, and see how each delivers continuous availability and why that matters. Download now!
Tags : 
data, queries, database operations, transactional databases, clustering, it management, storage, business technology, data storage
    
IBM
Published By: IBM     Published Date: Jul 05, 2016
This white paper discusses the concept of shared data scale-out clusters, as well as how they deliver continuous availability and why they are important for delivering scalable transaction processing support.
Tags : 
ibm, always on business, cloud, big data, oltp, ibm db2 purescale, networking, knowledge management
    
IBM
Published By: NetApp     Published Date: Sep 24, 2013
"Today, IT’s customers are more mobile and global than ever before and as such expect their applications and data to be available 24x7. Interruptions, whether planned or unplanned, can have a major impact to the bottom line of the business. ESG Lab tested the ability of clustered Data ONTAP to provide continuous application availability and evaluated performance for both SAN and NAS configurations while running an Oracle OLTP workload. Check out this report to see the results."
Tags : 
mobile, global, applications, cloud, configuration, technology, knowledge management, storage
    
NetApp
Published By: NetApp     Published Date: Dec 09, 2014
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response-time cache for volumes that are provisioned on the Flash Pool aggregate. The caching idea is sketched in the toy example after this entry.
Tags : 
netapp, hybrid, flash pool, ssd, hdd, iops, oltp, demartek
    
NetApp
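As a rough illustration of the caching idea behind Flash Pool, here is a minimal LRU read-cache sketch in Python. The eviction policy and sizes are assumptions for the sketch, not NetApp's actual placement algorithm: reads are served from a small fast SSD tier when possible and fall back to the HDD aggregate otherwise.

from collections import OrderedDict

class HybridPool:
    """Toy hybrid pool: small fast cache in front of a large slow store."""

    def __init__(self, ssd_blocks=2):
        self.hdd = {}             # backing store (slow, large)
        self.ssd = OrderedDict()  # LRU cache (fast, small)
        self.ssd_blocks = ssd_blocks

    def write(self, block, data):
        self.hdd[block] = data

    def read(self, block):
        if block in self.ssd:                # hit: fast path
            self.ssd.move_to_end(block)
            return self.ssd[block], "ssd"
        data = self.hdd[block]               # miss: slow path, then cache
        self.ssd[block] = data
        if len(self.ssd) > self.ssd_blocks:
            self.ssd.popitem(last=False)     # evict least recently used
        return data, "hdd"

pool = HybridPool()
pool.write("a", b"block-a")
print(pool.read("a")[1])  # 'hdd' -> first read misses the cache
print(pool.read("a")[1])  # 'ssd' -> repeat read is served from flash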
Published By: NetApp     Published Date: Feb 19, 2015
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response-time cache for volumes that are provisioned on the Flash Pool aggregate. In this lab evaluation, NetApp commissioned Demartek to evaluate the effectiveness of Flash Pool with different types and numbers of hard disk drives using an online transaction processing (OLTP) database workload, and to evaluate the performance of Flash Pool in a clustered Data ONTAP environment during a cluster storage node failover scenario. In the report, you’ll discover how Demartek test engineers documented a 283% gain in IOPS and a reduction in latency by a factor of 66x after incorporating NetApp Flash Pool technology.
Tags : 
    
NetApp
Published By: NetApp     Published Date: Sep 22, 2014
NetApp Flash Pool is a storage cache option within the NetApp Virtual Storage Tier product family, available for NetApp FAS storage systems. A Flash Pool configures solid state drives (SSDs) and hard disk drives (HDDs) into a single storage pool, known as an “aggregate” in NetApp parlance, with the SSDs providing a fast response-time cache for volumes that are provisioned on the Flash Pool aggregate. In this lab evaluation, NetApp commissioned Demartek to evaluate the effectiveness of Flash Pool with different types and numbers of hard disk drives using an online transaction processing (OLTP) database workload, and to evaluate the performance of Flash Pool in a clustered Data ONTAP environment during a cluster storage node failover scenario. In the report, you’ll discover how Demartek test engineers documented a 283% gain in IOPS and a reduction in latency by a factor of 66x after incorporating NetApp Flash Pool technology.
Tags : 
flash pool, fas storage systems, ssd, online transaction processing, cluster storage
    
NetApp
Published By: Micron     Published Date: Jan 12, 2017
Micron’s 9100MAX delivers on the NVMe promise with 69% better throughput and transaction rates plus much lower latency in PostgreSQL OLTP. Download this technical marketing brief to learn more.
Tags : 
    
Micron
Published By: Micron     Published Date: Jan 12, 2017
See how Micron® NVMe SSDs and Microsoft® SQL Server reach impressive OLTP transaction rates while drastically minimizing latency and simplifying configuration. Download this technical marketing brief now.
Tags : 
    
Micron
Published By: IBM     Published Date: Jun 08, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
Tags : 
ibm, linux, windows, telecommunications, healthcare, oracle database
    
IBM
Published By: IBM     Published Date: Jul 26, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
Tags : 
ibm, enterprise data, windows, linux, telecommunications, healthcare, consumer banking
    
IBM
Published By: IBM     Published Date: Sep 28, 2017
This paper presents a cost/benefit case for two leading enterprise database contenders -- IBM DB2 11.1 for Linux, UNIX, and Windows (DB2 11.1 LUW) and Oracle Database 12c -- with regard to delivering effective security capabilities, high-performance OLTP capacity and throughput, and efficient systems configuration and management automation. Comparisons are of database installations in the telecommunications, healthcare, and consumer banking industries. For OLTP workloads in these environments, three-year costs average 32 percent less for use of DB2 11.1 compared to Oracle 12c.
Tags : 
ibm, enterprise database, oltp, telecommunications, healthcare, consumer banking
    
IBM
Published By: Vertica     Published Date: Aug 15, 2010
If you are responsible for BI (Business Intelligence) in your organization, there are three questions you should ask yourself:
- Are there applications in my organization for combining operational processes with analytical insight that we can't deploy because of performance and capacity constraints with our existing BI environment?
Tags : 
business intelligence, vertica, aggregated data, olap, rolap, sql, query, data warehouse
    
Vertica
Published By: Vertica     Published Date: Feb 20, 2010
For over a decade, IT organizations have been plagued by high data warehousing costs, with millions of dollars spent annually on specialized, high-end hardware and DBA personnel overhead for performance tuning. The root cause: using data warehouse database management system (DBMS) software, like Oracle and SQL Server, that was designed 20-30 years ago to handle write-intensive OLTP workloads, not query-intensive analytic workloads.
Tags : 
vertica, ec2, cdr, elastic, saas, cloud computing, data management, ad-hoc
    
Vertica
Published By: IBM     Published Date: Feb 02, 2009
A comprehensive solution for leveraging data in today's retail environment. From customer data to product placement statistics, retail organizations are constantly juggling information. As the sheer amount of data continues to grow, it becomes increasingly difficult to manage. Not only does data come in many different forms—such as reports, memos and e-mails—but often it’s scattered across multiple repositories.
Tags : 
ibm, ibm balanced warehouses, ibm master data management server, ibm omnifind, ibm industry data models, dynamic warehousing, retail buyer’s guide, leveraging data
    
IBM
Published By: IBM     Published Date: Feb 02, 2009
A comprehensive solution for leveraging data in today's financial industry. Most organizations realize that the key to success lies in how well they manage data—and the banking industry is no exception. From customer statistics to strategic plans to employee communications, financial institutions are constantly juggling endless types of information.
Tags : 
ibm, information management software, leveraging data, dynamic warehousing, data management, improve customer service, real-time risk analysis, analytics capabilities
    
IBM
Published By: CDW     Published Date: Aug 28, 2015
ACCELERATE, CONSOLIDATE, AND SIMPLIFY
All-Flash Arrays (AFAs) are the preferred choice to accelerate high-performance OLTP and decision support/analytics applications on Oracle and Microsoft SQL Server databases. But not all AFAs are equal. Performance is just one dimension. XtremIO excels in all.
Tags : 
storage, servers, cloud computing, architecture, convergence, infrastructure, data center design and management
    
CDW
Published By: VMTurbo     Published Date: Apr 14, 2015
Seconds of Delay Have Multi-Million Dollar Consequences; Demand-Driven Control Changes the Game
Principled Technologies deployed VMTurbo’s Demand-Driven Control platform in their virtualized data center. VMTurbo assures application performance while maximizing infrastructure utilization. Principled Technologies implemented VMTurbo’s recommendations in its virtual environment with an OLTP application and saw significant improvements in performance. Learn how Principled Technologies achieved a 37% decrease in latency and a 23% increase in orders per minute (OPM).
Tags : 
vmturbo, datacenter, performance, production, operations, vmware, memory, storage
    
VMTurbo
Published By: Idera     Published Date: Jul 30, 2013
At a very high and overly simplified level, SQL Server performance can typically be distilled down to a question of how well SQL Server is able to utilize physical memory (RAM). For example, a server with a single 20GB OLTP database running on a SQL Server instance with 32GB of RAM will typically perform well, because it can keep the entire database in memory. A back-of-the-envelope version of this check is sketched after this entry.
Tags : 
idera, sql, sql tools, server dbas, server management, ram, software development
    
Idera
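As a back-of-the-envelope version of that fit-in-memory reasoning, the sketch below asks whether the database fits in the RAM left over after other overhead. The overhead figure is an assumption for the sketch, not a SQL Server internal.

def fits_in_memory(db_size_gb, server_ram_gb, overhead_gb=4):
    # Rough test: can the whole database stay cached in the buffer pool
    # once the OS and other processes take their share of RAM?
    return db_size_gb <= server_ram_gb - overhead_gb

# The example from the entry above: a 20GB database on a 32GB server.
print(fits_in_memory(20, 32))  # True  -> reads largely served from memory
print(fits_in_memory(60, 32))  # False -> expect disk I/O pressure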