Author: Data Centre Management

7 Steps to Save Your Organization Money with Capacity Management

Effective capacity management can offer immediate savings for organizations. For some, it can reduce their hardware and software budget by 20-30%.

With organizations under constant pressure to reduce expenses, more companies are turning to IT infrastructure management to cut costs. Here, we show how you can implement capacity management to keep IT costs to a minimum. Use these steps to achieve better capacity management. Continue reading “7 Steps to Save Your Organization Money with Capacity Management”


The Power of Private Cloud Computing

When companies move data, applications, and processes to the cloud, they must choose which infrastructure to use and decide how data and apps will be hosted and distributed. Companies have the option of private cloud computing, using the public cloud, or hybrid cloud computing solutions. Continue reading “The Power of Private Cloud Computing”

Tips and Best Practices for Data Storage Management

Data storage management isn’t what it used to be. In the past, hard drives stored information and onsite backups saved the day when hard drives failed. With these methods, organizations had more time to shop the market and compare storage upgrades.

Data storage companies today face pressure to deliver storage that is bigger, better, faster, and more agile. Data storage management has become more complicated, and more competitive, as storage solutions specialize for different workloads.

Continue reading “Tips and Best Practices for Data Storage Management”

IT Capacity Management – A Necessity for Businesses in Today’s Virtual Environments

Today, more companies are realizing the usefulness of IT capacity management tools. While these tools have been around for some time, there wasn’t much of a need for them – until now.

In the world of cloud and virtualized solutions, the high levels of automation and complexity in systems have created a greater need for capacity management in today’s IT environment. Companies need to be able to forecast capacity levels, plan ahead for IT operations, and predict risks in critical applications before they happen. This is where capacity management tools and IT capacity management planning come into play. Continue reading “IT Capacity Management – A Necessity for Businesses in Today’s Virtual Environments”

Private Cloud Storage: How is it different from public cloud storage?

More businesses recognize the affordability, reliability, and scalability of storing data in the cloud. But the question often asked is whether to choose public cloud storage, private cloud storage, or hybrid cloud services using local and off-site resources.

There are often misconceptions about private cloud infrastructure and storage. Here we’re helping you understand what private cloud storage is and what it is not. Continue reading “Private Cloud Storage: How is it different from public cloud storage?”

Goodbye Silos – Hello Automated, Virtual Cloud Data Storage Environments

Modern-day enterprises are often trapped between two worlds: 1) the evolving world of cloud data storage, agile development, and unlimited scalability with next-generation data storage companies, and 2) the traditional world of fixed software and hardware architectures combined with rigid application silos.

The latter approach, separating data storage into semi-automated application silos, takes us back to pre-industrial era production methods. Continue reading “Goodbye Silos – Hello Automated, Virtual Cloud Data Storage Environments”

Approaching the Billion IOP Datacenter

Are you still using traditional data storage systems? These systems are routinely pushed beyond their intended design. While users have always asked for more than raw storage capacity, they now want multiple terabytes, sub-microsecond latencies, and thousands of IOPs per deployment – all across multiple coexisting workloads.

Today, blogs and white papers recognize these needs and predict that the next-generation datacenter will deliver seamless scalability. New products fill our imaginations with buzzwords and promises to solve the cloud problem with container-based, microservice-delivered systems on commodity hardware. Yet amid all of this, there still lies a major problem that no one is addressing.

How do you approach the billion IOP datacenter?

We’ve settled into a comfortable industry cadence, moving from kilobytes to megabytes, gigabytes, terabytes, and petabytes of storage, and from kilobit to megabit to multi-gigabit bandwidth.

But presently, no one has architected a data infrastructure that easily manages kilo-iops, mega-iops, and giga-iops.

Why is a system needed to deliver these capabilities? Answer: the industry’s cloud problem. Pressure continues to mount to save time and money, with fewer employees and fewer resources to write, design, test, deploy, and scale applications up or down at speed. Applications must live together peacefully without interfering with one another, all while being deployed across multiple frameworks (Docker, OpenStack, VMware, etc.) and multiple platforms (containers, bare metal, and VMs). These frameworks, platforms, and applications divide the datacenter, and the resulting silos prevent both clear operations and efficient economics. Meanwhile, the success of the public cloud (Google, Azure, and Amazon Web Services) poses an ever greater challenge to data infrastructure and datacenter management. The only way around this is a universal data infrastructure that consolidates the current mess.

To address this challenge, we’re using these key elements:

  • An elastic data and control plane
  • API-based operational model
  • Standards-based protocols
  • The power of NVDIMMs/NVRAM/NVMe (soon 3D XPoint)

Elastic Data and Control Plane

So, how do we get an elastic control and data plane that can attach storage resources to numerous applications?

First, we created floating iSCSI initiator/target relationships that allow applications and their storage to move freely across storage endpoints, dissolving topological rigidity. As an application moves, we can drag its storage along and manifest its endpoints in the right rack. A migrating app can be served from many locations at once, since we spread the IOPs across them.
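
To make the idea concrete, here is a minimal sketch (the class and field names are our illustration, not Datera’s actual code): a volume keeps a stable iSCSI target identity, while the set of portals serving it is remapped as the application moves.

```python
# Conceptual sketch of a "floating" iSCSI target (illustrative only;
# this is not Datera's actual implementation).

class FloatingTarget:
    """A volume keeps a stable target IQN; the portals (IP:port
    endpoints) that serve it can change as the application moves."""

    def __init__(self, iqn):
        self.iqn = iqn        # stable identity the initiator logs in to
        self.portals = set()  # current set of serving endpoints

    def migrate(self, new_portals):
        # Re-home the target near the app's new rack: the endpoints
        # change, but the initiator's view of the volume does not.
        self.portals = set(new_portals)

# An app moves from rack 12 to rack 3; its storage follows, and several
# portals can serve the same target at once to spread out the IOPs.
vol = FloatingTarget("iqn.2016-01.example:orders-db-vol0")
vol.migrate(["10.0.12.5:3260"])
vol.migrate(["10.0.3.7:3260", "10.0.3.8:3260"])
```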

Next, we built an operational model that describes applications in terms of their service needs: performance, resiliency, affinity, and so on. During deployment, storage no longer has to be handcrafted as LUNs on pre-set RAID levels and other legacy attributes, which are inflexible and cumbersome. With Datera, every volume has fluid characteristics from build-up to tear-down.
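
As a rough illustration of what such a description might look like (the field names here are our assumption, not Datera’s actual template schema), the deployer states service needs and lets the system derive placement, replication, and media tier:

```python
# Hypothetical application "intent" document (field names are assumed
# for illustration). No LUNs, RAID levels, or rack locations appear;
# the system derives those from the stated service needs.

app_intent = {
    "name": "orders-db",
    "role": "production",
    "volumes": [{
        "name": "data",
        "size_gb": 512,
        "performance": {"min_iops": 50_000, "max_latency_ms": 1},
        "resiliency": {"replicas": 3},
        "affinity": "near-compute",  # keep storage close to the app
    }],
}
```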

Lastly, we don’t require deployment teams to spend time mapping elements of their storage system. Rather, we consolidate and deliver everything in one convenient architecture.

API-Based Operations Model

With Datera, developers can deploy storage without getting lost in the details. Simply describe your application’s needs (service levels) and roles (development, testing, QA, production, etc.), then kick back and let Datera do the work for you.
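
In practice, that amounts to a single API call. A sketch of the shape such a call might take (the endpoint path, payload, and auth header are assumptions for illustration, not the documented Datera API):

```python
# Hypothetical REST deployment call; URL, path, and header names are
# placeholders, not the actual Datera API.
import requests

app_intent = {"name": "orders-db", "role": "production"}  # abbreviated;
# see the intent sketch above for a fuller service-level description.

resp = requests.post(
    "https://datera.example.local/api/app_instances",
    json=app_intent,                 # describe needs, not LUNs
    headers={"Auth-Token": "..."},   # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the system reports the endpoints it provisioned
```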

Standards-Based Protocols

Datera is scalable and easy to use. It provides multi-tenant storage for containers, bare metal, VMs, etc. But some wonder: does it support every OS? When and where are drivers available for Linux, Windows, or even BSD?

No problem. If the OS supports iSCSI, Datera supports it. There’s no more hassle with proprietary drivers or client-side proxies. After all, who wants to track down a hundred instances of a driver?
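
Because access is plain iSCSI, attaching a volume on a Linux host uses the stock open-iscsi tools. For example (the portal address and IQN below are placeholders), wrapped in Python for consistency with the sketches above:

```python
# Attach a volume with the standard open-iscsi initiator: discover the
# targets at a portal, then log in. No vendor driver is involved; the
# portal and IQN values are placeholders.
import subprocess

portal = "10.0.3.7:3260"
iqn = "iqn.2016-01.example:orders-db-vol0"

subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
    check=True,
)
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", iqn, "-p", portal, "--login"],
    check=True,
)
# The volume now appears as an ordinary block device (e.g. /dev/sdX).
```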

The Power of NVDIMM/NVRAM/NVMe

Now, how do we reach giga-iops? Reaching this level of performance is useless unless you first resolve the three delivery challenges above.

After we capture a wide array of applications by their intent, fit a range of IOPs into one storage cluster, and have a control plane that makes configuration and re-configuration easy, we can add our last key ingredient – powerful NVDIMM and NVMe storage media – to reach giga-iops. Datera can deliver high performance and low latency automatically to any application on any platform. But just wait until you see what we can do soon with 3D XPoint.

Let’s just say, Datera is the new easy button for your next generation datacenter. At Datera, we:

  • Made it to the billion IOP datacenter
  • Started with hundred-IOP disks
  • Then moved to higher-IOP SSDs
  • Then to even higher-IOP NVMe
  • Learned to scale better than the masses
  • Figured out the scalable architecture: while others made the mistake of proprietary drivers, we use standard iSCSI, supported by Cinder.
  • Didn’t waste time trying to figure out where to put the control plane. We figured out how to scale and distribute it.

HOW?

  • No proprietary driver
  • Central control plane
  • Auto-tiering
  • Support for iSCSI and iSER (RDMA)
  • For 4K random reads, we can offer 150,000 IOPs per machine at 600MB/s (at that rate you’d need more than 6,000 machines for a billion IOPs – see the arithmetic sketch below)
  • Our all-flash array can offer you 500,000 IOPs per machine (you’d need 2,000 machines for a billion IOPs). This fits in 57 racks if you fill each rack to about 35U
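
The arithmetic behind those parentheticals, assuming 4 KiB reads and one rack unit per machine:

```python
# Back-of-envelope check of the figures above (assumes 4 KiB reads and
# one 1U machine per rack unit).
target_iops = 1_000_000_000                 # the billion IOP goal

per_machine = 150_000                       # 4K random read, per machine
print(per_machine * 4096 / 1e6)             # ~614 MB/s -> the ~600MB/s figure
print(target_iops / per_machine)            # ~6,667 -> "more than 6,000 machines"

flash_per_machine = 500_000                 # all-flash array, per machine
machines = target_iops / flash_per_machine
print(machines)                             # 2,000 machines
print(machines / 35)                        # ~57 racks at ~35 machines (35U) each
```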

WHAT?

  • Template-based application deployment for VMs
  • Persistent container storage
  • Datera makes provisioning storage and standing up a large cluster simple and easy

WHO?

  • Datera – creator of Application-Driven Cloud Data Infrastructure

WHY?

  • Cloud carving, from few-hundred-IOP applications to multi-million-IOP data jobs
  • Hosting providers can scale to their clients’ needs at the cost of standard software and hardware, without operational frustration


CONCLUSION

  • If you’re interested in saving money, in “coming home” from AWS, or in mirroring your current AWS deployment, we can provide elastic, inexpensive, and scalable storage options. Others may too, but learn their limits first
  • Others may have the right parts to build the next-generation datacenter, but only we know how to build the best data infrastructure for professional datacenter management

Continue reading “Approaching the Billion IOP Datacenter”