
Feed aggregator

Nodester Running Node.js 0.8.1

3-2-1 BLAST OFF!  Nodester now runs Node.js version 0.8.1 by default if you upgrade to our latest CLI v1.0.2!  You can upgrade from 0.4.12 or 0.6.17 to 0.8.1 by simply opening your package.json file and changing the following entry to:

"node": "0.8.1"

Commit your change to git and push it to nodester:

git commit -am "upgrading to v0.8.1"

git push
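If you prefer to script the change, a minimal Python sketch can update the entry for you (the function name is illustrative, and it assumes a standard JSON package.json in your project directory):

```python
import json

def set_node_version(path="package.json", version="0.8.1"):
    """Update the "node" entry in a package.json file."""
    with open(path) as f:
        pkg = json.load(f)
    pkg["node"] = version  # pin the Node.js version Nodester should run
    with open(path, "w") as f:
        json.dump(pkg, f, indent=2)
    return pkg
```

After running it, commit and push as shown above.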

Hack the planet!  Thanks again to @_AlejandroMG for performing this update!

Categories: Open Source

VMTurbo Provides a Boost for Virtualizing Business-Critical Applications

VMTurbo - Virtualization Management Blog - Thu, 05/17/2012 - 15:00

In its 2011 Report, VMware Journey Adoption Insights, VMware states that 53% of its customers have virtualized mission-critical applications.  VMware customers—as well as other hypervisor users—are in different phases of their IT transformation journey, as described in VMware’s three-stage virtualization adoption model (see figure):

  • IT Production: Virtualization initiatives are in their infancy.  This early stage is characterized by IT sponsorship of infrastructure efficiency initiatives (i.e., lower power/cooling costs, better utilize data center floor space, and refresh/upgrade aging infrastructure), with a focus on virtualizing IT-owned workloads.
  • Business Production: In this phase, IT organizations focus more on virtualizing business-critical, multi-tier applications in production. While further efficiency and operational improvements are focus areas, Quality of Service (QoS) is a top goal, especially to win over business application owners/sponsors.
  • IT-as-a-Service (ITaaS): With its ability to deliver the highest business value and agility at the lowest possible cost, this is the phase that most organizations aspire to reach. This stage is characterized by organizations’ ability to leverage their virtualization success with application delivery to build out a highly automated cloud-computing environment based on a utility model.  

Enterprise Hybrid Cloud

Given trending data, many of you are likely in or approaching the Business Production phase.  That means your main focus over the next 12 months is to virtualize your business-critical workloads—which can be a speed bump or (worse) a stall point on your journey. Moving tier-2 and tier-3 applications into the virtual realm was likely much easier, since these predominantly IT-owned workloads didn’t necessitate engaging the application owner in virtualization strategies and decision-making. Now that application delivery is tied to business-linked service level agreements (SLAs), performance and QoS are important metrics to assure.

That’s where VMTurbo comes in. VMTurbo delivers application visibility and performance assurance.  VMTurbo Operations Manager automatically adjusts resource allocation for virtualized workloads to ensure performance on the basis of priorities.  This allows users to assure service across the virtual and application delivery infrastructure, increase operational efficiency through automation, and improve resource utilization by maximizing physical infrastructure to satisfy current and future workload requirements.

In the newly released Operations Manager 3.1, VMTurbo ups the ante by supporting application delivery controllers (ADCs), starting with Citrix NetScaler.  ADCs (also known as load balancers) are gateways to the applications they front-end, improving performance, reliability, and security.  An ADC’s vantage point in the network aids in understanding many of the variables that can impact the performance and reliability of the applications it manages, especially those workloads that are multi-tiered and distributed.

Now that VMTurbo Operations Manager supports ADCs as targets, it can discover an ADC’s virtual applications (also known as vServers) and manage them.  VMTurbo’s integration with ADCs allows you to automate the control and real-time resource allocation decisions for multi-tiered applications front-ended by these appliances. VMTurbo Operations Manager can intelligently prioritize resources and automatically scale up or down across a load-balanced application farm as demand fluctuates.
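To make the scale-up/scale-down idea concrete, here is a generic threshold-based sketch of the concept. This is purely an illustration, not VMTurbo's actual decision engine; the function name and thresholds are hypothetical:

```python
def scaling_decision(active_vms, avg_utilization, min_vms=2, max_vms=10,
                     scale_up_at=0.75, scale_down_at=0.30):
    """Return the new VM count for a load-balanced farm based on demand."""
    if avg_utilization > scale_up_at and active_vms < max_vms:
        return active_vms + 1   # add capacity behind the ADC
    if avg_utilization < scale_down_at and active_vms > min_vms:
        return active_vms - 1   # reclaim an idle VM
    return active_vms           # demand is within bounds; no change
```

A production engine would also weigh priorities across workloads and forecast demand rather than react to a single utilization sample.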

As virtualized environments scale, consolidation ratios increase, and tier-1 applications are virtualized (i.e., organizations reach that Business Production phase), IT needs greater visibility and analytics that drive automation of IT processes.  The virtualization of business-critical workloads introduces another level of complexity, and ADCs by themselves may no longer be enough to maintain the expected performance improvements. VMTurbo Operations Manager will be key to providing application performance assurance in these environments—delivering a boost to get past any application virtualization speed bumps on your virtualization journey.


Categories: Companies

Synergy and Reinvention

VMTurbo - Virtualization Management Blog - Tue, 05/15/2012 - 15:00

We spent the last week exhibiting our wares out in San Francisco at Citrix Synergy – where more than 100 vendors and nearly 6,500 IT professionals convened to share, discuss, and learn more about the emerging trends in cloud computing, networking and virtualization technologies. Sitting in the audience listening to the keynotes on Wednesday and Thursday mornings, I was consistently amazed by the pace of innovation driven by the vendor side of the IT industry. The multitude of permutations on delivering a desktop experience, micro-apps integrated with enterprise systems delivered over smartphones and tablets, new cloud and datacenter architectures… the list goes on and on (and if you’re interested in getting it straight from the source, you can do so here).

Mark Templeton described this as the reinvention of work, compute and business – and proclaimed that it is driven by technology. I think he’s right. We work differently – we intermingle work and life and constantly shift between each mode. Certainly our compute experiences have changed dramatically – new devices and cloud services are propelling it to new levels daily. It’s all resulting in new ways of doing business.

Corporate IT sits at the crossroads, both as the key enabler and with the responsibility to manage it all. That was the thought that reverberated through my head as I soaked in the keynotes. It was reinforced as I spoke with people on the show floor and in the breakout sessions throughout the week. IT management needs a reinvention as well. It’s something we’ve been saying at VMTurbo for a while now (you can read our take on that subject here). With the cloud revolution in full-swing (powered by virtualization), it’s no longer possible to “read and react” to every piece of data in the system. IT needs a smarter approach in order to embrace this reinvention.

Categories: Companies

London Calling: VMTurbo Participates in VMware Forum

VMTurbo - Virtualization Management Blog - Fri, 05/11/2012 - 19:14

VMTurbo recently participated in its first VMware Forum event in EMEA—at Wembley Stadium in London, home of the England national football team (that’s soccer for you Yanks!). The event was a great opportunity to meet up with a number of VMTurbo customers who were very interested to learn about the new VMTurbo 3.1 release. However, we also met dozens of organizations that are new to VMTurbo’s technology—many of whom have been testing the new release. It was great to get such positive feedback on VMTurbo Operations Manager. It was also clear that customers really do see the benefits of a management system that goes beyond simple monitoring and alerting; instead, providing specific guidance and actions to prevent and resolve performance issues in virtualized environments.

One of the most popular topics of discussion was the VMTurbo planning capability. In particular, one of our customers explained how they had used the VMTurbo planning function to more accurately assess the hardware requirements for a new project. Based on VMTurbo Operations Manager’s ability to optimize the use of existing hardware through more intelligent workload placement and accurate assessment of the impact of adding new workloads to their environment, this customer was able to significantly reduce the bill for new blade server, VMware, operating system, and backup software licenses.  As organizations look to drive more efficiency for their virtualized infrastructure, this use case is becoming a recurring theme with VMTurbo customers.

The event opened with an inspiring keynote by VMware's Chief Cloud Technologist, Joe Baguley, about, as you might expect, the cloud! The interesting takeaway for me was when Joe talked about the CIO (and the IT department in general) moving from Chief Infrastructure Officer to Chief Integration Officer.  In other words, transitioning from managing the physical infrastructure IT is used to toward brokering service provisioning from the cloud.  For me, the key to making that feasible is the management layer.  Clearly this is where VMTurbo provides so much value to our customers. Moving between infrastructure and cloud management looks great on paper, but ensuring service levels, keeping within budget, charging back to the business, if required—these are all elements that CIOs need visibility into.

VMTurbo is participating in the next VMware Forum in Frankfurt, June 13! We hope our customers—as well as other firms interested in learning how VMTurbo's game-changing intelligent workload management is providing significant cost benefits and streamlining virtualization administration—will stop by for a demo! Maybe you will win an iPad like Leonel Fernadez did in London!
Hope to see you there!  Cheers!


Categories: Companies

Blended Cloud Environments – A Financial Services Use Case

CloudSwitch - Mon, 09/26/2011 - 11:10

By Damon Miller, Director of Technical Field Services

One of the most interesting trends in cloud computing is the emergence of “hybrid” solutions which span environments that were historically isolated from one another.  A traditional data center offers finite capacity in support of business applications, but it is ultimately limited by obvious constraints (physical space, power, cooling, etc.).  Virtualization has extended the runway a bit, effectively increasing density within the data center; however, the physical limits remain. Cloud computing opens the door to huge pools of computing capacity worldwide.  This “infinite” capacity is proving tremendously compelling to IT organizations, providing on-demand access to resources to meet short- and long-term needs.  The emerging challenge is integration—combining these disparate environments to provide a seamless and secure platform for computing services.  CloudSwitch provides a software solution that allows users to extend a data center environment into the public cloud securely without modification of workloads or network configurations.  I’d like to discuss a specific example of how CloudSwitch delivered a solution which spanned environments in a corporate data center and external cloud.

A large financial services company approached us some time ago with an ambitious plan to leverage cloud computing as a strategic initiative within the organization.  Their goals were to reduce operating costs, improve responsiveness to the various business units, and differentiate themselves within the industry through technological innovation.  Security was a fundamental requirement and a number of risk assessment groups were involved throughout the design and evaluation phases of the engagement.  Finally, this company also wanted to leverage a traditional colo environment from their cloud vendor to provide high-speed access to shared storage while also supporting their traffic monitoring equipment.  After a period of technical diligence, we established a reference architecture which satisfied all internal security requirements while remaining true to the fundamental goal of moving to a dynamic cloud environment. The result was a true realization of the hybrid model.

In the customer’s reference architecture, there are three primary components:

  1. Internal data center environment hosting the CloudSwitch Appliance (CSA)
  2. Private colo environment hosting the CloudSwitch Instance (CSI) and CloudSwitch Datapath (CSD) as well as shared storage for cloud instances
  3. Public cloud environment hosting customer workloads

The CloudSwitch Appliance is deployed into the customer’s data center environment to allow central management of one or more colo environments.  Each of these environments supports an isolated cloud deployment, for example for a particular business unit. CloudSwitch’s virtual switch and bridge components are implemented for high-speed connectivity between cloud servers and shared storage.  Finally, the public cloud environment is used to host actual customer workloads (operating systems).  Network communication and local storage are protected through CloudSwitch’s secure overlay network and transparent disk encryption functionality.

This approach yields several benefits:

  • Multiple instances of this dedicated environment can be independently deployed to support different business units
  • High-speed access to the enterprise cloud environment is available since the colo environment is physically located in the same facility
  • Physical infrastructure can be deployed into the colo environment in support of cloud servers—for example, shared storage devices
  • Dedicated firewalls can be deployed and traffic inspection is possible, satisfying the security groups’ requirements

The reference architecture supports the organization’s high-level goals while remaining compliant with all existing security and regulatory requirements.  Cloud servers have high-speed access to shared storage as a result of the colo deployment alongside the public cloud environment.  All network traffic and storage is encrypted automatically through CloudSwitch’s security capabilities, and through CloudSwitch’s role-based access controls (RBAC) the security team has centralized control over who is able to access each cloud environment.  The end result is a deployment model which truly implements a hybrid environment combining resources from the public cloud with traditional colo resources to deliver a secure, scalable platform for dynamic computing.

Categories: Companies

Guided by Data

“Programming by debugging” is taking something similar to what you want and adapting it until it becomes what you want. Contivo Analyst 5.0 lets you be guided by data to create maps. It still lets you use a more traditional approach — working with a mapping specification, and then testing the map with data. But version 5.0 lets you embed data into the mapping process in new ways, to let you work more efficiently and to use data samples as specification information.

For instance, let’s say you are familiar with a source and target interface, but need to map a new transaction between them. If you have representative samples of source and target data, load them into Analyst. Load up to ten source samples, and ten target samples. You can then filter the source and target by the data, so that only the fields and groups with data are visible. This lets you focus on only the items which need mapping.

As you map between source and target, run map testing. You can use the details pane to rapidly navigate around the map and view the output. For instance, if you have several instances of an input group, and you have mapped a particular output field which will get populated from the source instances, you can, with the press of a button, navigate between the outputs for the target field.

You can also edit the sample target data to match the values you generated. (Remember that if you filtered by data, only the source and target fields which have data are visible.) Because Analyst 5.0 compares the output of the transform against the target sample data you loaded, by making the values in the target sample match the transform output, you’ve indicated that you “accept” those output values. This means you just need to navigate to a field which doesn’t match the target sample data — which you can do with the press of a button — until the output and the sample data match.

One of the ways you can use the ability of Analyst 5.0 to load data, edit baselines (the target data you loaded), and compare the output of the transform against the baseline is to perform mapping guided by the data, a form of programming by debugging.
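The compare-against-baseline step described above can be sketched generically. This is an illustration of the workflow, not Contivo's API; the function and field names are hypothetical:

```python
def mismatched_fields(transform_output, baseline):
    """Return the fields where the transform output differs from the
    target sample data (the baseline), i.e. the fields left to visit."""
    return [field for field, expected in baseline.items()
            if transform_output.get(field) != expected]

# Hypothetical target sample (baseline) and transform output:
baseline = {"order_id": "1001", "total": "25.00", "currency": "USD"}
output = {"order_id": "1001", "total": "25.00", "currency": "EUR"}
# Only "currency" mismatches; editing the baseline value to "EUR"
# would mark that output as accepted and empty the mismatch list.
```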

Categories: Companies

Solving the Healthcare Crisis Takes Integration (and privacy/security)

Huddled in a hotel in the epicenter of the healthcare debate, Washington, DC, over 300 people have gathered to figure out how to share patient data in an effort to improve long-term care while lowering expenses. While those appear to be opposing forces, as information recording moves from paper to digital and organizations such as hospitals, payers and providers begin to collaborate with one another, magic happens.

The greatest challenges, as you could guess, are how to fund and sustain the operational sharing in Health Information Exchanges (HIEs), and integration. The US government stimulus package has helped to jump start the HIEs, but they are slow going. According to the eHealth Initiative’s new report entitled “2011 National Forum on Health Information Exchange”, of the more than 255 initiatives, only 85 are in advanced stages of development and only 24 say they are operationally sustainable (read this as breaking even).

From the integration perspective, the report notes that the number of initiatives citing technical systems integration as a major or moderate challenge increased from 97 in 2010 to 117 in 2011—more than a 20% increase. This is no surprise, given that so many HIEs and their constituents are finally trying to connect providers, payers and patients to collaborate. With architecture, applications, and connectivity, it is never as easy as it seems!

Of interest to me from the security side is that there is little debate here; privacy, trust and the security of the information is a must – no debate, other than how individuals maintain their privacy and determine who has access rights.

Does your physician, dentist and other healthcare providers store your information electronically? If so, how do they share it among your providers?
Until next time…
Gary

Categories: Companies

Ship when perfect enough…

Head In The Clouds - 3Tera - Thu, 06/23/2011 - 00:55

As we’re closing down on the 3.0 beta, I found this blog post about the white iPhone.

When do you think is the right time to ship a product? Are the criteria different for consumer products and cloud products (and how)?

Categories: Companies

Zeus customers speak out

It’s really important to us that we understand our customers and why they choose our technology. After all, moving away from time-tested solutions and go-to vendors is not something that’s done lightly. To help us gain more insight we commissioned...
Categories: Companies

Bucking the trend: steering clear of denial of service attacks

It was interesting to read a story in Computing recently which noted that the second half of 2010 saw a steep rise in the incidence of web attacks that caused downtime, with denial of service attacks up 22 per cent...
Categories: Companies

Retail in the Cloud

I read an interesting article on Computing’s website recently, that looked at which sector was set to benefit most from the cloud. This was revealed in EMC and the Centre for Economics and Business Research’s second cloud dividend report, which...
Categories: Companies

myCloudWatcher provides Managed Cloud Hosting today

Edward M. Goldberg - Tue, 10/05/2010 - 03:08

myCloudWatcher Provides Managed Cloud Hosting Today

We provide Cloud Deployment support that transitions into 24×7×365 Monitoring, Alerts, and Escalations for the life cycle of your projects.

myCloudWatcher implements “Best Practice” back-up solutions for your projects, including ongoing challenges to your back-up systems.

Our team of Cloud Computing veterans, Cloud Specialists and DBAs is available by phone. We manage your outside services such as ProjectLocker, New Relic, DNS Made Easy, and GoDaddy. We provide convenient consolidated billing of cloud services, including DNS, CERTs, AWS, RackSpace, Monitoring, and API; we also manage your renewals. myCloudWatcher has the advantage of seeing the broad landscape: millions of users, thousands of servers, hundreds of projects…. Visualize the trend.

Services Provided

  • GoDaddy

- Small, low-end servers for low-cost needs.

  • RackSpace - CloudServers, CloudFiles, CDN

- Great for a fall-back server farm.

  • AWS - EC2, RDS, Database Servers, DBShards

- The servers in the main farm.

  • Various ISP Servers

Life Cycle

  • New product updates (code push)
  • Back-ups
  • Server Rotation
  • Auto-scaling
  • Disaster Recovery

Alerts, Monitoring, and Escalations

  • myCloudWatcher continuously tunes all alerts to your application needs to provide early warning and trend analysis
  • Our detailed trend analysis of your deployments is a road-map for 100% availability
  • Our high level view of the internet allows us to distinguish service level problems from individual deployment problems. “Is it Facebook’s issue or is it my game?”

Implementation of Best Practice for Back Up Solutions

  • Back-up strategy is challenged regularly to test for accessibility and integrity
  • myCloudWatcher goes beyond the quick-fix to root cause analysis, focusing your engineers to quickly resolve the issues
  • Detailed root cause analysis translates to fast and precise disaster recovery

What you need

  • A dedicated customer-support person who is just a phone call away
  • Direct contact with cloud specialists who are cloud computing veterans, including DBAs
  • A person to call who can see The Big Picture for advice and disaster-recovery help

One-stop Shopping

  • Management of Outside services such as New Relic, DNS Made Easy, GoDaddy, and more
  • We keep track of your many subscription renewals
  • Consolidated billing: One bill for DNS, CERTs, AWS, RackSpace, and an aggregation of monitoring systems.
Categories: Blogs