
Feed aggregator

<a href="/the-automated-migration-of-windows-server-applications">The Automated Migration of Windows Server Applications</a>

AppZero Blog

Migrating and modernizing Microsoft Windows Server applications is a daunting task for many IT organizations. Over the last 15 years, physical and virtual server platforms have exploded in number and diversity. Most DevOps teams struggle to manage a mix of Windows applications spread across Windows Server 2000, 2003, 2008 and 2012 environments, and maintaining production applications across that mix has become an operational headache. With WS2000 platforms already beyond End of Service (EOS) and WS2003 platforms in their final two years of extended support, pressure is building from compliance teams and end users to reduce risk, save money, improve performance and modernize these servers and applications.

We estimate that there are still some 5 million W2000 servers running production applications. The number of W2003 servers exceeds 10 million. By the time W2008 reaches EOS in approximately 3 years, there will be more than 20 million servers running on it. Six years after that, W2012 EOS will force more than 30 million servers to be remediated and modernized. Each successive migration wave must be modernized within a similar three-year window, so the migration and modernization of Windows Server applications is an ongoing and growing tsunami of effort that IT organizations must address.

Many IT organizations plan to address their migration workload by using expanded, sometimes offshore, labour resources to manually reinstall applications in a new server environment. Assume it takes a person up to one month of effort per server (an extremely conservative estimate) to identify the applications on the server, develop an upgrade plan, find the old installation media, migrate the applications to the new environment, and then test and validate the migration. Some 60 million person-months of effort will then be needed for Windows migration tasks over the next decade. For the IT industry this amounts to a minimum of 5 million person-years of pure migration effort, ignoring the costs and time for management and server decommissioning, along with the costs of physical server upgrades and software. Regardless of the monthly labour cost assumption, the required investment in labour is significant, as is the required head count.
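As a back-of-the-envelope check, those figures hang together; here is a quick sketch in Python using the server counts and the one-month-per-server assumption quoted above:

    # Rough check of the effort estimate, using this post's numbers.
    server_estimates = {
        "W2000": 5_000_000,
        "W2003": 10_000_000,
        "W2008": 20_000_000,
        "W2012": 30_000_000,
    }
    months_per_server = 1  # the post's "extremely conservative" assumption

    total_servers = sum(server_estimates.values())     # 65 million servers
    person_months = total_servers * months_per_server  # ~65M person-months
    person_years = person_months / 12                  # ~5.4M person-years

    print(f"{total_servers:,} servers -> {person_months:,} person-months"
          f" (~{person_years / 1e6:.1f}M person-years)")

Rounding down for servers that are decommissioned rather than migrated gives the roughly 60 million person-months and 5 million person-years cited above.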

At AppZero, we believe that intelligent automation is a better way to address the problem of Windows application migration and modernization. Using programmed intelligence to automate the monitoring and analysis of applications on a server, combined with automated migration tools, can reduce the effort to migrate applications from months to days. Even if only 30-40% of applications lend themselves to easy automation, automated migration builds significant momentum for corporate modernization projects and saves years of labour. The percentage of migrations that can be addressed through intelligent automation is also increasing over time as our system “learns” more from thousands of successful migrations. We believe there just isn’t enough time or skilled resources available for any approach, other than automation, to be viable.

Every day AppZero helps customers build internal migration centers of excellence to plan for and address this ongoing tsunami of effort. Call us if you would like to learn more about intelligent automated migration of Windows Server applications.

Categories: Companies

<a href="/iis_and_sql_migration_use_cases">IIS and SQL Migration Use Cases</a>

AppZero Blog

Getting off to a good start is important when facing the daunting challenge of eliminating hundreds or thousands of Windows 2003 servers from an enterprise's application portfolio. One way to accelerate migrations is to identify common application stacks and align them to repeatable use cases that AppZero can migrate quickly.

Both Microsoft IIS and Microsoft SQL Server 2005 and above are excellent candidates for migration using AppZero.

With the Microsoft IIS use case, one of the major benefits of leveraging AppZero is the ability not only to migrate web sites, application pools, and .NET Frameworks, but also to perform an in-place upgrade to a newer version of IIS in the same migration maneuver. AppZero can upgrade IIS 5.0, 6.0, 7.0, 7.5, and 8.0 to 8.5, the latest version running on Windows 2012 R2. This agility allows dramatic time savings when migrating IIS and .NET-based applications.

AppZero accomplishes this by leveraging Microsoft Web Deploy (for IIS 6.0, 7.x, and 8.x) and custom scripts (for IIS 5.0) to gather the list of features enabled on the source OS and enable the like-for-like features on the new destination server. Backwards-compatibility components are also enabled to lessen the amount of post-migration remediation and avoid the need to recompile applications.

To the migration specialist, the steps are similar to any migration: applications, including Web Server (IIS), are selected as in-scope, and IIS-specific user accounts and groups are included. Unlike most migrations, the source server's IIS services do not need to be stopped to complete the process. The result is applications migrated to Windows 2012 R2, rehosted to align with the new computer name, 32-bit applications repointed to leverage WOW64 (Windows on Windows 64-bit), and functioning web applications, all in less time than a manual migration of equal scope.
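For a flavor of the Web Deploy plumbing this kind of migration builds on, here is a minimal sketch (illustrative only, not AppZero's tooling; the site name, package path, and msdeploy install location are placeholder assumptions):

    import subprocess

    # Placeholder path: the default Web Deploy V3 install location.
    MSDEPLOY = r"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"

    def export_site(site_name: str, package: str) -> None:
        """Capture an IIS site's content and applicationHost config to a package."""
        subprocess.run(
            [MSDEPLOY, "-verb:sync",
             f"-source:appHostConfig={site_name}",
             f"-dest:package={package}"],
            check=True)

    def import_site(package: str, site_name: str) -> None:
        """Restore the packaged site into IIS on the destination server."""
        subprocess.run(
            [MSDEPLOY, "-verb:sync",
             f"-source:package={package}",
             f"-dest:appHostConfig={site_name}"],
            check=True)

    export_site("Default Web Site", r"C:\temp\defaultsite.zip")

An in-place IIS version upgrade layers OS-level feature enablement on top of a sync like this, which is where the feature-list gathering described above comes in.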

Microsoft SQL Server 2005 and above is also a great migration candidate for use with AppZero. (Microsoft SQL Server 2000 is left off the list because it requires a multi-hop upgrade to reach a supported configuration: SQL 2000 must be upgraded to SQL 2005 and then on to a newer version for support on Windows 2008 R2 and Windows 2012 R2.) For Microsoft SQL Server 2005, AppZero recommends a migration with dissolve followed by an in-place upgrade to a newer version of SQL Server: SQL Server 2005 SP4 reaches the end of extended support on April 12, 2016, so it makes sense during the migration to move to a version with a longer support lifecycle.

During the migration, all SQL components are selected, services are stopped on the source machine to remove file locks, and the SQL services are then started within the AppZero migration container. Validate that SQL Server is functional by launching components like SQL Server Management Studio and ensuring all in-scope databases are present and online. Once this has been confirmed, SQL Server can be dissolved into the destination operating system. If an in-place upgrade is required, mount the media for the new SQL Server version and proceed with the upgrade. Note that the SQL Server installer requires the bitness to match that of the source SQL version: if the source SQL Server was 32-bit, the in-place upgrade must also be 32-bit. Microsoft SQL Server 2014 ships with 32-bit support.

The advantage of leveraging AppZero for SQL migrations is that it brings across all SQL elements in one migration process, including the Database Engine, Analysis Services, Reporting Services, Notification Services, Integration Services, Replication, and Management Tools. The migration does not require the remediation normally associated with a manual SQL migration.
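As a sketch of that validation step (assuming Python with the third-party pyodbc package and Windows authentication; not part of AppZero's product), a quick query of sys.databases, available in SQL Server 2005 and later, confirms each in-scope database is ONLINE before you dissolve:

    import pyodbc  # third-party package; requires a SQL Server ODBC driver

    # Connect to the SQL instance running inside the migration container.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")

    # Every in-scope database should report ONLINE before dissolving.
    for name, state in conn.execute(
            "SELECT name, state_desc FROM sys.databases ORDER BY name"):
        flag = "" if state == "ONLINE" else "  <-- investigate before dissolving"
        print(f"{name:<30} {state}{flag}")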

In both of these use cases, AppZero dramatically reduces the time required for the migration while providing operating system uplift, rehosting applications to the new server name, repointing the application artifacts (at both the file and registry level) to WOW64, and creating a virtualized application container that can be reused on additional destination servers.

Finding additional common application frameworks and proving their compatibility and viability is a faster way to churn through the application portfolio and establish quick wins on which a migration program or factory can build. Click here for a partial list of applications that have been successfully migrated from Windows 2003 to Windows 2012 using AppZero.

 

Categories: Companies

<a href="/help-guide-us-to-the-promised-land">Help Guide Us to the Promised Land</a>

AppZero Blog

At AppZero, we work closely with customers to understand business requirements and drivers. The future roadmap and enhancement list for our products is driven by how customers use our solutions. AppZero technology has helped in Banking, Finance, Pharmaceutical, Healthcare, Retail and many other industries.

Even though Windows Server 2003 is now under time-limited extended support, there are more than 10 million servers still running on it.

Let’s take a closer look at some of the ways AppZero can alleviate modernization pain points, across industries:

  1. Leave the old OS in the dust: AppZero can migrate an application from Windows Server 2000, Windows Server 2003, or Windows Server 2008 to a new Windows Server 2008 R2 or Windows Server 2012/R2 OS, without the pain and time commitment of reinstallation.
     
  2. Migrate and upgrade in one step: AppZero can migrate Microsoft IIS data and components from an old operating system to a new one while upgrading to a newer version of IIS on the destination server, in one easy, magic step.
     
  3. Get on board the Cloud: AppZero can migrate enterprise Windows server applications to a public or private cloud like Azure, IBM SoftLayer or Amazon.
     
  4. Distribute your applications: AppZero puts your applications in containers, and you can distribute these containers across different environments for different purposes. Turn these containers on when you need them for testing and development, for example, and then turn them off when you don’t. Compress containers and keep them – the “gold image” can be handy for application recovery and DR.
     
  5. Isolate your applications: An application can run in an AppZero container, isolated from other applications and abstracted from operating system drives on the destination server. For example, isolation is helpful in Citrix environments when operating system drives on the source server don’t match operating system drives on the destination server – eliminating the headache of drive mapping for you.

Have a business need that you don’t see here? Let us know! We’d love to hear from you and discover more great uses for AppZero. Please contact us, we benefit from your guidance.

Categories: Companies

<a href="/costs-and-risks-of-not-upgrading-your-windows-2003-infrastructure">The Real Costs and Risks of Not Upgrading your Windows 2003 Infrastructure</a>

AppZero Blog

We all procrastinate
When it comes to maintenance and busy work (disagreeable chores), we all procrastinate. Delaying mundane chores to focus on new creative work is human nature.

In the Information Technology (IT) world, no task seems more mundane and irritating than patching and upgrading an operating system (OS). We’re all bugged by those persistent messages to patch and reboot our OS.

On the server side, patching and maintaining an OS isn’t just a minor pain; it can have significant implications for the stack of applications that run or interact on a server. Databases, libraries, communication stacks, performance and other layers of the server environment are frequently impacted by an OS upgrade, and upgrades and software changes can have unintended consequences. So it’s not surprising that IT operations teams sometimes take a short-term, perceived lower-risk strategy (let sleeping dogs lie) and delay applying OS patches to servers that are running smoothly and don’t have issues.

The End of W2003
However, what happens when it isn’t just another OS patch, but a vendor announcing End of Service (EOS) for a whole OS? As we know (after a steady drumbeat of pre-warning), last July Windows Server 2003 went EOS: a traumatic event from an IT operations perspective. No more OS patches or fixes, a future filled with security, compliance and audit risks, plus expensive extended support fees. Code Red. (Ouch, the damn dog is clearly awake, and has sharp teeth!)

In early 2015, there were an estimated 22 million W2003 servers still in production.  In the run-up to EOS and in the six months since, work on W2003 migration has caused more than a few sleepless nights for IT teams whose businesses rely on it. 

At AppZero (where I am CEO and Chairman), we’ve been up too, working with customers in Banking, Finance, Pharma, Healthcare, Retail and many other industries who run significant W2003 infrastructure and are focused on modernizing it. Frequently these customers have thousands, sometimes over ten thousand, W2003 servers, and they have hundreds of millions, even billions, invested in the critical applications that run on those servers.

Regardless of the number of W2003 servers a customer is running, one experience seems universal: it is time to modernize. IT audit is likely to raise the risk of running on an unsupported OS during the 2016 audit review. Even with high-cost extended support (fees that will double every 12 months for 3 years), businesses can only delay the problem, not fix it; W2003 servers simply can’t run forever. Plus there are tangible benefits and features for customers in a new OS such as W2012. Opening up the applications running on W2003 to new W2012 hardware speeds them up, lets them run in the cloud (if you choose), saves money and promises to make them more secure and stable.

What to do? 
Most of AppZero’s customers who are obsessed with W2003 modernization are taking 3 steps:

1)    Decommissioning: getting rid of some of the W2003 servers and the applications running on them, if they aren’t needed anymore (throwing out the garbage; this removes up to 25% of the W2003 servers);

2)    Hand-working: modernizing and upgrading W2003 servers through manual upgrades where application vendors provide a simple, fast, predictable upgrade script to a new OS such as W2008 or W2012. Manual upgrades still take significant time (sometimes weeks per server), and user acceptance testing for the migrated application running on the new OS is still needed. (As a rule, for approximately 20-25% of applications it may be more reliable to upgrade to a new OS with manual effort, if possible);

3)    Automated migration: for the remaining 50%+ of W2003 servers, tough work and analysis needs to happen, and automated tools can help. Often install scripts are missing, there are no vendor-supported upgrades, and there is no obvious migration path. You could just sandbox them all and run on W2003 until the apps die or are decommissioned, but sandboxing tens, hundreds or even thousands of W2003 servers is a lot of risk exposure. Customers are instead using automated migration tools that extract and migrate these apps to a new OS. Though success is not guaranteed, 70%+ of the remainder can likely be migrated, with the added bonus of a clear understanding of why the roughly 30% of “non-migratable” apps should be sandboxed.


Why can’t all W2003 apps move forward to a new OS?
So why are some apps “non-migratable”? Several factors impact the migration of W2003 applications to a new OS. While over 70% should move using one automated technique or another, things like unsupported Java libraries, encryption and hidden authorization keys that are not supported in the new OS environment impair the ability to migrate. The factors that prevent application migration can be highlighted using automated tools, providing clear justification for sandboxing those servers. The few apps that cannot be migrated can run to End of Life on W2003 while broader app replacement strategies are developed.

What’s the cost of not upgrading your W2003 infrastructure?
Turns out, in IT, doing nothing costs money (not surprising)! Many of AppZero’s customers (in both regulated and unregulated industries) are paying extended support fees to Microsoft annually for up to 3 years (until July 2018, if they need them). Extended support fees may exceed $3,000 per server by 2018.

In the event an unsupported W2003 server suffers a security breach or outage, the cost of business interruption per day can reach hundreds of thousands, even millions, of dollars for many customers. Penalties for running data centers out of compliance can also wreak havoc with businesses. In those cases, IT departments are essentially betting their businesses and hoping nothing happens.
 
Migrating W2003 apps by hand also costs money and burns time (often weeks of work per server). Automated migration tools can save more than 15 days of labor and thousands of dollars per migration. If a customer has hundreds or even thousands of servers to migrate, those savings add up.

For example, if you have a hundred servers you can move with automation: 100 servers × ($1,000/server in reduced extended support + $10,000/server in reduced manual migration effort) comes to over $1M in migration savings.

When it comes to upgrading OSs, both for W2003 and the pending W2008 migration cycle, there’s real work to be done. By working smart you can save millions of dollars, save time and get your business onto a new OS infrastructure that will last for years, while simultaneously creating a path to future application modernization and cloud implementation. There really is no reason to procrastinate.

Categories: Companies

<a href="/automating-legacy-modernizations-with-migration-containers">Automating Legacy Modernizations with Migration Containers </a>

AppZero Blog

Industry Trends brought us Here
The relentless doubling of compute horsepower every 18 to 24 months, known as Moore’s Law, is one of the trends that has shaped the IT industry. Machine virtualization and cloud computing have combined to reduce the time it takes to create a new machine that harnesses the latest in computing power to nearly zero. These mammoth forces, plus a bit of application developer productivity, have resulted in a huge explosion in the number of machines running applications over the past 10 to 15 years.

The benefits of staying current and adopting the latest foundational technologies are undeniable. Faster compute, low-cost network and storage, reductions in time to market, and agility (both technical and business) form a powerful amalgamation of trends that lays the groundwork for competitive advantage for many businesses. Figure out how to leverage new technology before your competitors do, or be prepared to find yourself in the unemployment line.

The Growing Old Challenge

Let’s face it, everyone knows growing old comes with some challenges.

“Growing old is like being increasingly penalized for a crime you haven't committed.”
 Pierre Teilhard de Chardin

When trying to stay current with new technology, the fly in the ointment is the application, which, once installed, injects its components into the many nooks and crannies of the operating system, making it extremely difficult to move. Applications are not designed to be moved once in production; they are basically stuck. Of course, the application can be updated with new code and new functionality, but moving it tends to be really hard and costly. As the application ages it is stuck running on an older OS and older supporting infrastructure. In a few years the shiny new application exploiting the latest advances in infrastructure becomes out of date: an albatross, a burden, and possibly a detriment to the business.

Aging Infrastructure in the Spotlight
One only has to look at the large number of machines (23.8M) that were still in production on Windows Server 2003 in the summer of 2014, just 12 months before End of Support (EOS) for that OS. Upon reaching the EOS date there were believed to be 15 million machines still in production. That roughly 8 million reduction has largely been attributed to decommissioning and wholesale replacement of applications rather than migration of the applications to a supported OS (WS2008 or WS2012). In addition to running on an OS that was more than a decade old, many of these machines (an estimated 45%) were still running on physical servers that were almost as old. With computing power doubling every 2 years, these apps were executing in environments that were 5 times slower and 3-5 times more expensive than what is available today.

Waiting until the EOS date has more severe consequences than the effects of running on aging infrastructure. Many enterprises have chosen to pay for an extended Microsoft Custom Support Agreement (CSA), often costing $3M+ in year one and projected to exceed $10M in year 3. Even if an enterprise does not accept the punitive and escalating cost of a CSA, the lack of available security updates increases the risk of application vulnerabilities and unplanned downtime. Running on aged hardware is also more expensive, less efficient, and prone to more downtime as hardware approaches its mean time to failure (MTTF). Surely, if application mobility were a less expensive proposition, these applications would have been moved to a modern environment long ago.

Aging Infrastructure Expands Exponentially
It becomes clear as time marches on that the combination of increased processing power, VM-to-server density, self-service cloud computing (with near-instantaneous machine provisioning), and falling costs causes an increase in the number of server instances created every year. There are estimates that when Windows Server 2008 nears EOS in 2019 there will be 57M machines running production applications, 2.5 times as many as were claimed the year before EOS for WS2003. A simple doubling puts the WS2012 population at well over 100M in 2024.

An excerpt from the 2014 IDC study "The Cost of Retaining Aging IT Infrastructure" shows the dramatic increase in the logical server installed base, going from 25M in 2004 to 85M in 2013, a 16.5% CAGR. If the trend continues at the same rate, there will be 456M servers by 2024. One could argue that the growth rate is increasing because of cloud adoption, but even using the 10-year trend line results in an extremely large number of machines.
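For reference, the projection follows directly from the quoted figures; a quick sketch (the 85M base and the 16.5% CAGR are the study's numbers):

    # Project the 2013 installed base forward at the study's growth rate.
    installed_2013 = 85_000_000
    cagr = 0.165
    years = 2024 - 2013

    projected_2024 = installed_2013 * (1 + cagr) ** years
    print(f"Projected 2024 installed base: {projected_2024 / 1e6:.0f}M")  # ~456M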

Containers
Containerization is the process of encapsulating an application in a package with its own operating environment. This isolates the application and provides a minimal operating layer, enabling quicker starts and letting a self-contained application move from one machine to another. The containerized application can be run on any suitable physical or virtual machine without any worries about dependencies. An isolation layer encapsulates the application and maintains its separation from the OS and machine.

Migration Containers
Migration containers are purpose-built, specialized containers that can extract existing applications already installed on an operating system. Once an application is migrated into a container it can be moved to other machines. Advanced migration containers also allow for OS "up-leveling": an application extracted from an older OS can be loaded onto a newer one. For example, an application could be moved from a Windows Server 2003 OS into a migration container and run on a Windows Server 2012 OS. Once an application has been packaged into a migration container it is liberated from the underlying infrastructure, and the aging predicament is solved.

Decoupling an existing application from the OS makes it completely portable. The migration container can be moved within a datacenter, across geographic distances, and onto or off of a cloud; in essence, the application can run anywhere the target OS is running. Migration containers can also be turned on and off rapidly, allowing the flexibility to spin up an application in isolation, use it for any duration, and quickly tear it down.

Summary
It is clear that there has been a dramatic rise in the number of machines in production due to Moore’s Law, machine virtualization and cloud computing. The economic and competitive advantages of running a business on a current platform are clear. Once installed and in production, applications tend to be frozen in time and difficult to move or migrate to a newer platform. Containers begin to address the portability challenge of staying current. Migration containers are purpose-built to solve the migration challenge and can extract and containerize existing applications. They also allow existing applications to fast-forward into the future and enable the business to keep its competitive edge.

Categories: Companies

<a href="/m2c-teleports-existing-enterprise-apps-to-the-cloud">Machine to Container (M2C) Teleports Existing Enterprise Apps to the Cloud </a>

AppZero Blog

It's been a busy time for tech's ongoing infatuation with containers. Amazon just announced EC2 Container Registry to simplify container management. The new Azure container service taps into Microsoft's partnership with Docker and Mesosphere. You know when there's a standard for containers on the table there's money on the table, too.

Everyone is talking containers because they reduce a ton of development-related challenges and make it much easier to move across production and testing environments and clouds. Containers are the technology that, many believe, delivers on the cloud's long-promised portability, avoiding vendor lock-in and putting developers, system administrators and their enterprises in the driver's seat.

Getting up to speed about containers is not easy, but the good news is, there is a way for developers and their enterprises to become an instant part of the container revolution.

It involves moving the applications that exist in enterprises today into containers so they can "build, ship, and run any app anywhere," as Docker says.

The key is to move just the application, not the entire machine -- and not an image of the machine. Let's call this approach "Machine to Container" or M2C. Packaging an existing application into a container so you can move just the application and not the operating system makes it completely portable. You can leave the app in the container to be moved again, use it as a distribution system, or dissolve the container and leave your app installed on the new machine or cloud. M2C can be viewed as evolutionary rather than revolutionary.

In fact, the model for encapsulating physical machines into virtual machines (VMs), each with an operating system and programs, dramatically changed the makeup of enterprise data centers. Machine virtualization allowed developers to spread workloads around on servers that weren't being fully utilized. VMware became the commercial pioneer of machine virtualization with the automation to manage these new environments leading to its dramatic growth. Cloud computing evolved from this virtualization, bringing efficiency, automation and scale of operations for tremendous cost reductions.

Then along came Docker, with some 800 million downloads afforded by its container tech, arguably another form of virtualization. Like virtualization, containers redefine how applications are deployed. Sub-dividing compute resources has huge advantages and everyone knows it. Google is now rattling sabers with Kubernetes, its version for container orchestration. Startup CoreOS, which has its own containers, has adopted Google’s management and provisioning approach to compete with Docker. Even VMware is transforming offerings to participate in this emerging market.

 

While there might not be an 800-pound gorilla in the container market anytime soon, it is clear that the container approach is here to stay. With a new computing paradigm, there are many opportunities to add value, and it's clear the market will evolve from a single product offering into a robust ecosystem of companies serving this market.

So what's the fastest way to start using containers? One of the biggest challenges to the adoption of virtual machines was converting old physical machines to new virtual machines in order to realize the cost savings and agility that come with machine virtualization. A set of companies emerged in the mid-2000s to help system administrators migrate from physical to virtual machines. The same problem exists when it comes to containerizing existing applications.

With 99.9% of the applications in use today not containerized, it makes sense to get these applications into the container world fast for all of the reasons virtualization, containerization and the cloud make sense. We want to move from machine to container by migrating existing applications from inside a physical or virtual machine to a container.

So how does machine to container work? Migrating existing machines and applications into containers can be compared to image migration. Image migration migrates an entire machine, including the OS and the applications. Post migration remediation includes removing physical machine device drivers and replacing them with suitable virtual devices. While this approach works well for physical-to-virtual and virtual-to-virtual use cases, the unit of work is the whole machine and there is no visibility into any of the layers inside a machine (operating system, management, web-server, app-server, database server, etc.).                   

On the other hand, M2C migrates apps by separating them from the operating system and copying them into a container. Once the application (with all its binaries, configuration, data and dependencies) is separated from the OS and replicated from the source machine into a container, the resulting package can be copied or provisioned to another machine, including a newer platform like Windows Server 2012.
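Conceptually, the flow looks something like the sketch below: a deliberately simplified illustration of the M2C idea, not AppZero's implementation, with all paths and names hypothetical.

    import shutil
    from pathlib import Path

    def containerize(app_root: str, dependencies: list[str], dest: str) -> Path:
        """Copy an app and a manifest of its dependencies into a package dir."""
        package = Path(dest)
        package.mkdir(parents=True, exist_ok=True)

        # 1. Replicate the application's binaries, configuration and data.
        shutil.copytree(app_root, package / "app", dirs_exist_ok=True)

        # 2. Record dependencies (registry keys, services, shared libraries)
        #    so the runtime layer can resolve them on the destination OS.
        (package / "manifest.txt").write_text("\n".join(dependencies))

        return package

    # Hypothetical legacy app, its dependencies, and a package location.
    pkg = containerize(r"C:\inetpub\legacy-app",
                       [r"HKLM\SOFTWARE\LegacyApp", "LegacyAppService"],
                       r"D:\containers\legacy-app")
    print(f"Package ready to provision onto the destination server: {pkg}")

The real work, of course, is in discovering those dependencies and resolving them against a newer OS at run time.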

 

The destination can have different machine characteristics (physical, virtual, on-premise, or cloud) and different characteristics inside the machine (OS, management apps, Terminal Services, applications, etc.). The unit of work is granular (an app), so not only can the characteristics of the host machine change, but the application configuration within the machine can change as well. This flexibility results in the agility system administrators and infrastructure architects seek. It avoids getting locked into a deployment stack, and lets one keep up with new and emerging deployment offerings, like new OS releases, data center management suites, and cloud offerings.

Most of today's enterprise applications are Windows based, and, by nature, difficult to move. With containers gaining momentum, M2C can be viewed as the box that moves this software into the modern world.

Categories: Companies

<a href="/back-to-business">Back to Business: a Round-up of Resources for Windows Server 2003 Migration </a>

AppZero Blog

Transitioning from the freedom of summer to the structure of back to work and school can be tough for all of us. Yet, September is a time of renewal, a time to refocus on our goals and remember that, sometimes, making big changes begins with small ones. The key is not to overwhelm ourselves, but to keep moving forward, often with small steps at a time.

We know that the big work of migrating hundreds of thousands of machines running Windows Server 2003 -- including 175 million websites or one fifth of the internet, according to recent numbers provided by Internet services firm Netcraft --  still lies ahead.

It's easy to get overwhelmed by the daunting task of upgrading systems. At the same time, many vendors and their service partners have been working for two or more years to ease the transition. We at AppZero have had the time and the real world experience of working with CIOs, IT directors, program and project managers, developers, supply chain partners -- in fact every IT role imaginable, in nearly every industry and size of organization.

 Best practices for Windows Server 2003 migrations have never been more plentiful. Webinar recordings, whitepapers, case studies, how-tos, feature roundups, blog posts. There's no shortage of information and quite a lot of talent out there. Here are some of our favorites from the media, the AppZero partner community and beyond:

September brings changes in weather and attitudes. We can hit the snooze button or we can jump up and embrace the challenge, but either way, it's coming. The best solution for those of us responsible for technology that supports our enterprise goals is to move from Windows Server 2003, protect our systems from hacking attempts and migrate to the new, modern world. How to get started? While she may not have been talking about cloud and datacenter migration, Amelia Earhart said it best, "The most effective way to do it, is to do it."

Categories: Companies

<a href="/small-world">Small World</a>

AppZero Blog

This post was suggested and contributed to by Emad Steitieh, with contributions by Daniel Kucinski.

Internationalizing software has been overlooked by companies in years past. Many available software applications lacked the ability to work with different languages or with locale variations such as calendars, currencies and numbering conventions. Although North America is a rich market and many companies strive to take market share there, the international market brings new opportunities for businesses; this is why more and more companies are working hard to satisfy all tastes both domestically and internationally.

With the recent end of support for Windows Server 2003, the focus on application migration and cloud onboarding has increased as enterprises move their applications onto more modern [supported] platforms in datacenters and the Cloud. Windows Server 2003 has been deployed worldwide for the last 12 years, and the applications running on the old operating system (OS) are written in many different languages.

Windows Server 2003 and other OSs have matured to provide facilities for software developers to enable multi-language features in their software. OSs such as Microsoft Windows operate in two modes: ANSI and Unicode, and every system API function that accepts strings has two versions, a single-byte (ANSI) one and a double-byte (Unicode) one. Thus the Windows environment lets the user work with keyboards in different languages, and with user interface elements such as dialogs, messages, notifications, and fonts that are not just in English. At AppZero, we have decided to meet the persistent demand from our international customers and enable our software to work with non-English Windows environments. Since the release of AppZero 6.1 earlier this year, our customers all over the world can enjoy the convenience of migrating their applications from non-English sources to non-English destination machines. This includes several key components such as:
 

  1. Creation of VAAs (containers) under non-ANSI names and paths
  2. Finding and tethering of source applications that have non-ANSI names, file paths, file content, registry keys, etc.
  3. Viewing logs and configuration files that contain non-ANSI characters


For example, migrations with AppZero are agnostic to OS-specific regional settings, encodings and/or the language that the application is programmed or configured in. The common components that may use double-byte characters are the following:
 

  • Server host name
  • Credentials and accounts
  • File paths and registry entries
  • Unicode support for shortnames
  • Menu items
  • Different delimiters used in other localities


Additionally, this functionality extends to AppZero’s COTF feature, so that changes can be made in configuration files to support application rehosting. With double-byte support, AppZero now enables application migration and cloud onboarding of all application types around the world.
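To make the ANSI/Unicode split concrete, here is a minimal sketch using Python's ctypes on Windows (illustrative only; the non-ASCII directory name is a made-up example):

    import ctypes

    # Win32 string APIs come in pairs: CreateDirectoryA (ANSI, single-byte,
    # code-page dependent) and CreateDirectoryW (Unicode, double-byte UTF-16).
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    path = "C:\\temp\\資料"  # a directory name containing double-byte characters

    # The W variant accepts the name as-is, regardless of the system code page;
    # the A variant would first squeeze it through the ANSI code page.
    if kernel32.CreateDirectoryW(path, None):
        print("created", path)
    else:
        print("CreateDirectoryW failed, error", ctypes.get_last_error())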
 

Categories: Companies

<a href="/windows-server-2003-late-for-important-date">Windows Server 2003 - I&#039;m late, I&#039;m late, for a very Important Date!</a>

AppZero Blog

Windows Server 2003 End of Support is here, and there is little most enterprises can do at this point to change the fact that they are now dependent on an unsupported operating system. Here we are at Microsoft's Worldwide Partner Conference again, muttering "I'm late, I'm late," just like the herald-like White Rabbit of Lewis Carroll's Alice's Adventures in Wonderland (we have a cool White Rabbit Twitter campaign going this week; check it out), except that we cannot manipulate time. This is the event where, last year, many sessions highlighted the processes, tools and partner ecosystem available to help companies migrate off of Windows Server 2003. Analysts, the media and Microsoft were talking in terms of close to 20 million machines in production still running Windows Server 2003. Back then, the opportunity was characterized as a Y2K-style situation that would result in as much as 45 billion dollars spent on remediation.

In the past year, there have been thousands of articles, blogs and other content aimed at educating the market. Check out our blog series, "Everything You Ever Wanted to Know about Windows Server 2003 Migration." For most companies, the deciding equation to move or not boiled down to assessing the risk of running on an unsupported OS increasingly vulnerable to security attacks or paying to remediate the risk. The key question: Do I pay $2,000, $3,000, $4,000 or more to migrate, or can I isolate my apps from bad things, not get hurt and save the money?

With zero time left, many enterprises have chosen to delay remediating or migrating. Historically, risk and negative outcomes are hard for most people and organizations to quantify (one of the reasons the insurance industry is so big and profitable). In the financial collapse of 2008, few if any financial institutions (okay, maybe Goldman Sachs) understood the risk of being involved in the US mortgage market. A good question: what is the cost of a breach or an attack on those machines running Windows Server 2003? They have been running fine for years. We now know there is much evidence that applications running on older operating systems have high amounts of downtime, costing the business unplanned time and money. But as Alice articulates: "I went along my merry way, and I never stopped to reason. I should have known there'd be a price to pay, someday…"

There are a couple of forms of delay that we see happening among customers attempting to lower the security risk of Windows Server 2003. First, many large enterprises just kick the can down the road by writing a check to Microsoft for extended support via a Custom Support Agreement (CSA). There are a few things you have to do to qualify, and this route is expensive and does not address the underlying problem, though it does get you support and patches during the term of the agreement. A material risk in this approach is that organizations with a large number of machines, 5,000-10,000 or more, will not be able to remediate that many applications before the CSA expires. The amount of disruption necessary to solve this problem is very large, and change management processes will slow the move.

A second approach is to isolate the machines that will have this increased vulnerability by adding a security layer and/or moving them to the cloud. This is likely a feel-good approach; it obscures the risk but does not solve the underlying problem. Who validates and protects the isolation approach? What happens when the isolation layer is flawed? It is understandable that some organizations adopt this approach as a stopgap measure: they are trying to buy time to address the core of the problem.

We've been working closely with more than 100 system integrators on the Windows Server 2003 challenge: companies that specialize in helping customers modernize, consolidate, relocate or move mission-critical applications, and that have a deep understanding of what the enterprise is experiencing when it comes to Windows Server 2003 EOS. AppZero conducts an annual "State of Readiness for Windows Server 2003 End of Support" survey, now in its third year, as well as frequent polling surveys, including a most recent update conducted on June 18.

The headline from this polling survey, "Customers Didn't Budget for Windows Server 2003 End of Support," shows just how far down the rabbit hole customers are. When asked about the timing of projects, more than two thirds of our partners see projects as yet to start or in early definition/planning phases.



When asked about how many machines are going to be retired (shut off or replaced by new servers) most did not know. This makes sense because many projects are still in the discovery phase or yet to start.

When asked about how the project was going to be accomplished, 65% said they would use AppZero, but it is important to note that the audience included many AppZero partners.

When asked why they are not moving, most simply say money is the issue. "I don’t have budget…"

Why companies don't have the budget brings us back to why we went down the rabbit hole in the first place and the central theme of our hero's tale of survival and adapting to change. Like the frantic, harried White Rabbit, we're often playing defense, operating from anxiety and the pressures of having a lot of jobs to do. And like the transition from childhood to adulthood, technological change can be complex and difficult. In the end, we brush the dirt from our fluffy white tails and emerge enlightened.

Categories: Companies

<a href="/five-tips-for-troubleshooting-app-migrations">Five Tips For Troubleshooting Application Migrations</a>

AppZero Blog

This post, authored by Troy Hummon, Senior Solutions Architect at AppZero, is the second guest post in our new blog series, "Everything You Want to Know about Windows Server 2003 Migration."

In our last technical post, we covered migrating custom or bespoke applications where the source code may be missing. In this post, I will highlight basic troubleshooting tools and techniques to use in your migration troubleshooting process.

Occasionally, when you migrate fully functional applications from one server to another, you may encounter anomalies that prevent the application from functioning properly. When this occurs, there are some basic troubleshooting steps that we recommend for investigating, debugging, and ultimately resolving the issues.

Windows Task Manager
The Windows Task Manager displays the programs, processes, and services that are running on your computer. You can use Task Manager to monitor your computer’s performance or to terminate a program that is not responding. If you're connected to a network, you can use Task Manager to view network status. If more than one person is connected to your computer, you can see who is connected and what they're working on, and you can send them a message.

To start Task Manager, right-click in any open area of the taskbar and select ‘Start Task Manager,’ or press <Ctrl><Shift><Esc>. Task Manager is often used to see when an application launches, whether it is still running, and whether it has launched other applications or processes, as well as to terminate applications or processes that have become non-responsive so an application testing phase can be started over.

During the migration of an application, the tether process, if enabled and active, can make the application seem to run slower because components are being tethered over in real time or in the background. I often use the Tether Monitor icon status together with Task Manager, which shows the CPU utilization of the tetherproxy process, to determine whether tether activity is occurring. It’s normal for tether activity to occur during initial test phases.
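If you prefer to script that check, a rough stand-in (assuming the third-party psutil package; "tetherproxy" is the process name used above) looks like:

    import time
    import psutil  # third-party package; a scripted stand-in for Task Manager

    def watch_tether(name: str = "tetherproxy", samples: int = 10) -> None:
        """Print CPU use of matching processes, like watching Task Manager."""
        procs = [p for p in psutil.process_iter(["name"])
                 if p.info["name"] and name in p.info["name"].lower()]
        if not procs:
            print(f"no {name} process found")
            return
        for _ in range(samples):
            for p in procs:
                print(f"{p.info['name']} (pid {p.pid}): "
                      f"{p.cpu_percent():.1f}% CPU")
            time.sleep(1)

    watch_tether()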

Windows Resource Monitor
Resource Monitor is a system application in some Windows operating systems that displays real-time information about the use of hardware and software resources. Open Resource Monitor by clicking the Start button and typing Resource Monitor in the search box, or by running resmon.exe. Alternatively, you can launch Resource Monitor from the <Performance> tab of Task Manager by clicking the Resource Monitor link/button.

For application migrations, Resource Monitor is useful for monitoring file and network I/O of an application to determine whether it’s reading/writing to the file system and/or network.
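A scripted stand-in for that check (again assuming the third-party psutil package; the PID is a placeholder for the application under test) might look like:

    import psutil  # third-party package

    p = psutil.Process(1234)   # placeholder PID of the application under test

    before = p.io_counters()
    # ... exercise the application here ...
    after = p.io_counters()

    # Non-zero deltas confirm the app is actually reading/writing.
    print("bytes read:   ", after.read_bytes - before.read_bytes)
    print("bytes written:", after.write_bytes - before.write_bytes)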



Windows Services
A service is a specialized program that performs a function to support other programs. Many services operate at a very low level and need to run even when no user is logged on; for this reason they are often run by the SYSTEM account (which has elevated privileges), unlike ordinary user accounts. We’ll review how to use Windows Services to view migrated services and to start, stop, and configure them. Another great way to view services on your computer is through the <Services> tab of Task Manager.

You manage services with the Windows Services snap-in for Microsoft Management Console (MMC). To open this snap-in, type services.msc at a command prompt. You need administrator privileges to gain full functionality in the Windows Services console; running as a standard user, you can only view service settings.

During application migrations, Windows Services is useful for stopping migrated services so you can exercise them on the new destination server while tethered, ensuring that all the binaries that comprise each service have been transferred. Services can also be started and stopped from the <Services> tab of the AppZero Admin Console. If you want to view additional configuration details for a service, you will need to use Windows Services or SC Query.
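The same checks can be scripted against the built-in sc.exe, for example (the service name is a placeholder; run elevated to stop or start services):

    import subprocess

    SERVICE = "MyMigratedService"  # placeholder: a migrated service's name

    def sc(*args: str) -> str:
        """Run sc.exe and return its output."""
        return subprocess.run(["sc", *args],
                              capture_output=True, text=True).stdout

    print(sc("query", SERVICE))   # current state (RUNNING / STOPPED)
    print(sc("qc", SERVICE))      # configuration: binary path, start type
    # sc("stop", SERVICE)         # stop it before exercising under tether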

Windows Event Viewer
Event logs are files that record significant events on your computer, such as when a user logs on or when a program encounters an error. Whenever these types of events occur, Windows records the event in a log that you can read by using Event Viewer. Advanced users might find the details in event logs helpful when troubleshooting problems with Windows and other programs.

Events are classified by severity as Error, Warning, or Information. An error is a significant problem, such as loss of data. A warning is an event that isn't necessarily significant but might indicate a possible future problem. An information event describes the successful operation of a program, driver, or service.

Event Viewer displays detailed information about these events -- for example, programs that don't start as expected or updates that are downloaded automatically -- and can be helpful when troubleshooting problems and errors with Windows and other programs.

Computers that are configured as domain controllers will have additional logs. System events are logged by Windows and Windows system services, and events can also be forwarded to an event log by other computers. Applications and Services logs vary: they include separate logs for the programs that run on your computer, as well as more detailed logs that pertain to specific Windows services.

You must be logged on as an administrator to perform these steps. If you aren't logged on as an administrator, you can change only settings that apply to your user account, and some event logs might not be accessible.

The Windows Event Viewer is an MMC snap-in that tracks information in several different logs. Windows Logs include:

  • Application (program) events, classified as error, warning, or information
  • Security-related events. These events are called audits and are described as successful or failed, depending on the event (for example, whether a user trying to log on succeeded).
  • Setup events. Computers that are configured as domain controllers will have additional logs displayed here.
  • System events. These are logged by Windows and Windows system services.
  • Forwarded events. These events are forwarded to this log by other computers.

To open Event Viewer, click the Start button | Control Panel | System and Security | Administrative Tools, then double-click Event Viewer. Next, click an event log in the left pane and double-click an event to view its details.
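A command-line counterpart (using the built-in wevtutil; the Application log and error-level filter here are just an example) can pull recent events without opening the console:

    import subprocess

    # Query the 5 most recent Application-log errors (Level 2 = Error),
    # newest first, formatted as text.
    result = subprocess.run(
        ["wevtutil", "qe", "Application",
         "/q:*[System[(Level=2)]]",
         "/c:5", "/rd:true", "/f:text"],
        capture_output=True, text=True)
    print(result.stdout)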

Windows Component Services
I often use Windows Component Services because the Windows Event Viewer and Windows Services are built into the same management console view. The real value, however, is the ability to easily examine the COM+ Applications and DCOM Config of the destination server, as well as the [source] server from which you are migrating applications.

After you install the operating system, you must configure the system to enable Component Services administration on your network. You enable administrator control for the System Application. You also configure Component Services to recognize your network by making computers visible to Component Services (Make Computers Visible to Component Services) and enabling component communication across computer boundaries by configuring DCOM (Enable or Disable DCOM). These configuration requirements are crucial to proper administration of Component Services.


Notes:
Depending on the needs of your system, it might be necessary to perform configuration tasks beyond the tasks that have been outlined here. Although you may be able to configure the following properties at a later time, it is important to be aware of them early:

• The Component Services snap-in requires the Distributed Transaction Coordinator (DTC) service to be running; therefore, you must configure the DTC. For more information, see Manage Distributed Transactions
• If enhancing system scalability is a goal for you, you can configure Component Services to shut down application processes that are not being used. For more information, see COM+ General Tasks

In a future post, I’ll share tips for Advanced Troubleshooting including: Process Monitor (ProcMon), Process Explorer (ProcExp), Global Assembly Cache (GAC) Tool and Service Control (SC) Utility. Watch this space…
Categories: Companies

Divya Ghatak, our Chief People Officer, Joins Watermark’s Board of Directors

Good Data

GoodData is proud to announce that Divya Ghatak, our Chief People Officer, has joined Watermark’s board of directors. Watermark’s mission is to increase the number of women in leadership positions, and Divya’s passion for building diverse and inclusive teams will be a huge asset to their organization.

Divya oversees global people operations at GoodData, where she combines her extensive leadership experience developing strategic people operations for diverse global businesses with a special focus on employee engagement, talent and leadership development, corporate culture and organizational collaboration. Recently, I sat down with Divya to chat with her about her thoughts and feelings about this latest achievement.

Q&A:

How did you hear about Watermark, and what appealed to you about them?

“I first learned about Watermark through the diversity initiatives that I worked on while I was at Cisco, and more recently I attended their Watermark Conference for Women along with several other GoodData employees. In terms of what drew me to them, the scale and the level of coordination is unprecedented compared to what I’ve seen in the past, from the diversity and quality of speakers to the level of organization and reach of the program. I also love the fact that this is a completely mission driven nonprofit dedicated to a most relevant cause of our times!”

What will your duties be as a Board Member at Watermark?

“My primary initiatives will be around furthering the mission of Watermark, which is to increase the number of women in leadership positions everywhere, not just in tech. I am excited about being able to represent Watermark’s amazing membership community and at the same time, continue to leverage my leadership role in enhancing Watermark’s public standing. My board duties will involve committing dedicated time to prepare for BOD and/or committee meetings, actively serving on at least one standing board committee, and attending social events, conferences and speaker series.

The business case for building diverse and inclusive environments has never been stronger, and I’m excited about using my talents, network and knowledge base to bring this to the forefront with Watermark; they provide an amazing forum for inspiring and developing people.”

What are your top 3 goals as a Watermark Board member?

  1. Attend key events that foster Connection – Watermark creates a safe and comfortable space where professional women truly come together and make meaningful connections, pursue new opportunities, problem solve, empathize, de-stress, and celebrate each other’s successes.
  2. Help generate resources for development through programs that offer opportunities for continuous learning and promote innovation and growth from top thought leaders in monthly webinars, speaker series and half-day conferences.
  3. Advocacy – Leverage my network and connections to amplify our influence and actively work towards the next quantum leap in our individual and collective success. Our goal is to increase representation of women at executive levels to drive innovation, human development and economic growth.

What can the women of GoodData look forward to learning through your involvement with Watermark?

“This new position will directly support the Women in Leadership program led by Marlene Arroyo at GoodData. We have been lucky to have the sponsorship of our CEO Roman Stanek in creating an environment that engenders diverse and inclusive teams. Watermark hosts 50+ events a year, which will provide huge opportunities to connect GoodData’s employees with women in leadership. By attending these events, women at GoodData can gain practical knowledge, such as how to improve their negotiation skills as well as get closer to their dreams and aspirations through targeted programs, development and sponsorship”.

What can the men of GoodData look forward to learning through your involvement with Watermark?

“My passion is to build amazing experiences where talented people can perform at their best, and that goes just as much for men as it does for women. While a lot of Watermark’s events tend to be women oriented, we know we must get men in on the conversation. There’s an article in the New York Times that I love that talks about including men at these events to make a dent in the lack of women in leadership roles. When we’ve had external speakers talk about these topics, the GoodData men have attended with passion. To quote - ‘sisterhood is not enough, workplace equality needs men too!’”

Categories: Companies

Transforming Financial Organizations Through Distributed Analytics: GoodData at Finovate 2016

Good Data

On September 8‒9, 2016, about 1,600 executives, analysts, venture capitalists, and entrepreneurs from across the financial industry will meet in New York for Finovate, the only conference series focused exclusively on showcasing the best and most innovative new financial and banking technologies. For two days, attendees will enjoy live demos of the latest financial and banking technologies as well as high-impact networking sessions.

What makes Finovate truly unique is not only the subject matter, but also the experience: no keynote speakers, no expert panels, just rapid-fire seven-minute demos of the latest in financial and banking technologies.

And yes, GoodData will be there! The team and I look forward to demonstrating how large financial services and payment processing companies can distribute valuable data and analytics to branch managers, agents, merchants and external partners to help them personalize sales, improve consumer loyalty and turn data into a profit center within their B2B network.

I’ll be on stage to demo our Financial Services solution on Thursday, September 8 at 2:50pm EDT. Be sure to stop by our booth in the Networking Hall, or fill out this form to schedule a meeting with our team.

See you in the Big Apple!

Categories: Companies

Power to the (Business) People: How The GoodData Platform turns Data into a Profit Center

Good Data

Once upon a time, business intelligence was just about generating charts, graphs and reports. Then analytics — trending, predictive, comparative — came along. Now, as the Third Wave of Business Analytics, or “BI 3.0,” begins to take shape, we can reflect on whether the discipline has delivered on its promise … and which gaps remain to be filled.

One area where most BI vendors still fall short is the “democracy gap.” Since its inception, business intelligence has delivered unprecedented visibility into the past, present, and future … but only to the “analytically elite” — the analysts, the power users, and the Excel junkies. As competitive pressures escalate, we need to release analytics from this silo and make it a part of the enterprise’s culture at all levels, especially in B2B industries.

The democratization of analytics is at the heart of GoodData’s platform. Our latest eBook, Going Beyond the Data: Analytics for the Masses, gives not only a technical overview of our platform but also a roadmap for transforming your data and analytics into a true net-new, revenue-generating profit center.

If Enterprise Data Monetization is to succeed, organizations have to get resources and insights into the hands of the people who need them, and offer a more accessible way to consume and interact with the final data product. Distributing targeted analytics to each participant — including customers, partners, and distributed stakeholders — will drive greater value throughout the entire business network.

This report details how GoodData’s platform and expertise enable customers to deliver contextually and semantically aware "Smart Business Applications" that bring data and analytics to the applications where work is actually done, through three services (a rough sketch follows the list):

  • The Distribution Service provisions, manages, and monitors analytic environments for each network member, ensuring the highest levels of security, performance, and scalability without sacrificing manageability.
  • The Analytics Service enables business users to engage with strategic analytics and operational reporting from their business network and easily explore the data to resolve unanswered business questions.
  • The Connected Insights Service enables the “network effect,” yielding greater understanding of external influences as well as operational and strategic performance through benchmarking of business network members to drive revenue.
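
To make the division of labor concrete, here is a minimal Python sketch of the three services. Every class and method name below is hypothetical, invented purely for illustration; none of it reflects GoodData's actual APIs, which this post does not describe.

    # Hypothetical sketch of the three-service split described above.
    # These names are illustrative only, not GoodData's published API.
    from dataclasses import dataclass, field

    @dataclass
    class AnalyticEnvironment:
        """An isolated analytics workspace for one network member."""
        member: str
        dashboards: list = field(default_factory=list)
        metrics: dict = field(default_factory=dict)

    class DistributionService:
        """Provisions, tracks, and monitors one environment per member."""
        def __init__(self):
            self.environments = {}

        def provision(self, member):
            env = AnalyticEnvironment(member)
            self.environments[member] = env
            return env

    class AnalyticsService:
        """Lets business users publish and explore reporting."""
        @staticmethod
        def publish(env, dashboard):
            env.dashboards.append(dashboard)

    class ConnectedInsightsService:
        """Benchmarks each member's metric against the network average."""
        @staticmethod
        def benchmark(environments, metric_name):
            values = {e.member: e.metrics.get(metric_name, 0.0)
                      for e in environments}
            average = sum(values.values()) / len(values)
            return {member: value - average
                    for member, value in values.items()}

A real platform would layer security, scalability, and monitoring onto each of these services; the sketch only shows how the responsibilities separate.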

To learn more, download a complimentary copy of the eBook Going Beyond the Data: Analytics for the Masses.

Categories: Companies

Nucleus Research: Business Users Demand Embedded Analytics

Good Data - 12 min 13 sec ago

Did you know that in the next five years, 90 percent of analytics solutions for business users will be embedded in other core applications?

That’s what Nucleus Research reports in their latest data and analytics research note, The Evolution of Embedded Analytics. As the “democratization of data” places analytics in the hands of business users across the organization, the demand for embedded, visual, easily digested information is on the rise.

Business users such as sales teams demand the data required to make better decisions, but have little desire to toggle back and forth between apps or to interact with a complex dedicated analytics tool. Embedding data into core applications helps the organization on three levels:

  • Adoption: For an analytics tool to deliver ROI, it must be used. Since analyzing data is a low priority for business users, convenience and ease of use are a must. Embedded analytics offers them the easy, in-context accessibility needed to drive adoption of the app.
  • Context: Embedding analytics enables users to approach an analysis with better understanding of how a specific insight can help them.
  • Productivity: Nucleus discovered that toggling between a primary application and a standalone analytics application can consume as much as 1 to 2 hours of an employee’s time per week. Embedded analytics allows users to incorporate analytics into their daily activities without adding another task to their day.

So, what’s next? According to Nucleus, embedded analytics will play an ever-larger role in the daily lives of employees at all levels of the organization. “In the next 7 years,” the report concludes, “90 percent of business users will interact with analytics at least once per day but only 15 percent will realize it.”

To learn more about why embedded analytics are the future of organizational data, download the Nucleus Research paper here.

Categories: Companies

Eckerson Unveils the Biggest Benefit of Multi-Tenant Cloud BI Solutions

Good Data - 12 min 13 sec ago

Most of us are familiar with the benefits of moving business intelligence (BI) and data warehousing environments to the cloud: speeding deployments, avoiding capital expenditures on hardware infrastructure, simplifying software upgrades, and minimizing the need for IT involvement.

In a new white paper, my colleague Wayne Eckerson, founder and principal consultant of Eckerson Group, reveals the most impactful benefit of cloud BI solutions: the ability to cascade virtual BI deployments to internal and external networks of organizations and users. As Wayne puts it,

Externally, these networks make it possible for organizations to enrich customer, partner, and supplier relationships by supplying complete, interactive and self-service BI environments rather than static PDF reports or data dumps typical of current extranet reporting solutions. These data monetization networks will enable organizations to improve customer service and stickiness, increase revenue or generate new revenue streams, and fully monetize their data assets.

Cloud deployments allow companies to create unique BI instances for each business unit, division, or department, each of which can generate a new BI instance for each of its internal groups (if permitted). Separate BI environments can also be created for each member of its external network, including customers, suppliers, and partners.
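
As a rough sketch of how such cascading might be modeled, consider the Python fragment below. The BIInstance class and its methods are hypothetical names chosen for illustration, not part of any GoodData or Eckerson Group specification.

    # Hypothetical model of cascading BI instances: each instance can
    # spawn child instances for internal groups or external members.
    class BIInstance:
        def __init__(self, name, parent=None, external=False):
            self.name = name
            self.parent = parent
            self.external = external  # True for customers, suppliers, partners
            self.children = []

        def spawn(self, name, external=False):
            """Provision a child instance one level down the network."""
            child = BIInstance(name, parent=self, external=external)
            self.children.append(child)
            return child

        def walk(self, depth=0):
            """Yield every instance in the cascade with its depth."""
            yield depth, self
            for child in self.children:
                yield from child.walk(depth + 1)

    # Example: a company cascades instances to a division, one of its
    # teams, and an external partner.
    company = BIInstance("Acme Insurance")
    claims = company.spawn("Claims Division")
    claims.spawn("Fraud Analytics Team")
    company.spawn("Broker Network", external=True)

    for depth, instance in company.walk():
        scope = "external" if instance.external else "internal"
        print("  " * depth + instance.name + " (" + scope + ")")

The point of the tree structure is that provisioning is recursive: any node, internal or external, can become the root of its own sub-network, which is what distinguishes cascading from flat multi-tenancy.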

Eckerson’s white paper goes on to explore related benefits of BI cascading, including

  • Balancing centralized governance with local flexibility
  • Increasing customer satisfaction and the value of existing products and services
  • Generating entirely new revenue streams

To learn more, download a complimentary copy of the white paper Data Monetization Networks: The Real Value of Multi-Tenant Business Intelligence.

Categories: Companies

Monetizing Data in the Insurance Industry

Good Data - 12 min 13 sec ago

Data has played a vital role in the insurance business since the industry’s earliest days, with actuaries using advanced statistical analyses to assess and monetize risk. Today insurance companies are experiencing an exponential increase in data, which, if applied strategically and effectively, offers significant competitive advantages and monetization opportunities.

On September 15, chief data officers from across the insurance industry will meet in Chicago to explore the evolving opportunities surrounding big data and the importance of championing analytics at the enterprise level. The Chief Data Officer Forum Insurance 2016 will feature more than 20 speakers presenting on topics including

  • Price optimization
  • Fraud analytics
  • Predictive modeling
  • Customer data management
  • Disruptive innovation
  • Embedding a data culture within your organization
  • Data quality
  • And many more

Stephanie Burton and I will be at the GoodData table in the expo hall. Stop by to learn how distributed analytics empowers organizations in the insurance industry to commercialize and monetize their existing data. Register here.

See you there!

Categories: Companies

How to Develop and Implement a Successful Data Product Launch Plan

Good Data - 12 min 13 sec ago

What happens after you’ve built your data product? Well, you hope to get it into the hands of your customers. But if the first time you think about your launch plan is after the product is built, you’re already too late.

Your go-to-market (GTM) and launch planning needs to start alongside your product development, so that you can execute when your product is ready for pilot, beta, or general availability.

A GTM and launch plan has many parts; the key aspects are listed below, organized into stages that align with when your product is built and ready for launch.

Pre-Launch
  • Define the value proposition and create a positioning statement for your data product. In this statement, explain the target market and the benefit to your customers; for example, “‘Our Analytics Product’ empowers pharmaceutical manufacturers to analyze transactions and change their processes to improve margins.” Creating this positioning statement allows everyone in your organization who hasn’t been part of the product development effort to understand the product and its value, and it anchors the key messaging used in marketing initiatives.
  • Define the metrics for your launch and assign goals so you can track whether your launch is successful. Your metrics might cover the reach of your launch, adoption of your product, additional revenue you hope to gain, or customer satisfaction if the goal of your data product is to improve your customers’ experience.
  • Collaborate with your marketing teams to build collateral for your prospects. Use your positioning statement to drive the material. This could be one-pagers to highlight the core value your customers will get from using the data product, demos built for your target audience or case studies to highlight how other customers have already leveraged your data product.
  • Enable and train your sales teams so they are familiar with the data product you’ve built and can communicate with prospects about how and why it would be valuable to them alongside your core product or service. It is especially useful to have a demo version built out that your sales team can leverage.
  • If your organization relies on implementation or support teams, this is the time to educate them on your data product and train them on what you’ve built, the value of it and how you expect customers to get up and running with it. Also, define the support process during this stage. Who is responsible for first response? When would you escalate to GoodData support? If you have this process outlined before launching, it definitely saves you from having to put out fires later on!
Launch

While your value proposition and positioning will help you understand who your target audience is and the channels you’ll be using to target them, you will still need to pick a launch date and prepare for activities leading up to and during your launch period so that you can engage these target customers. The strategy you pick for your beta product can be different from the one you employ for general launch. Depending on your marketing strategy, you may decide to include the following activities in your launch:

  • Webinars for prospects or webinars along with customers already using your data product
  • Blog posts and articles
  • Targeted email campaigns with curated content or offers to trial the data product for a period of time
  • Live events: our customers have pitched their data products at industry conferences and customer workshops

We also recommend creating a launch plan and keeping these dates in mind as you work on building your data product. Here are examples of a high-level and a detailed plan:

[Example high-level and detailed launch plan graphics]

Post-Launch

After your initial launch is complete, continue these activities to keep interest high among your prospects and customers. If you did a beta launch and plan a full product launch later on, this is also a good time to analyze the results of the beta and use them to shape your strategy.

  • Continue your marketing efforts by including content on your data product in your organization’s content strategy and cadence. Refresh case studies and customer webinars frequently.
  • Track progress against your metrics and refine your launch strategy. Are you reaching the customers you wanted to reach? Are you hitting the adoption and revenue goals you planned for? If not, think about how to refine your launch strategy for the next release or how to continue reaching your target audience with ongoing activities.
  • Finally, collect and analyze customer feedback post launch so you can use it to influence your product roadmap. It’s also important to set up a mechanism to collect this feedback regularly. Will you gather feedback via surveys, customer focus groups, an online community, or your support team? If your organization relies on an implementation team to set up your product and the data product for your customers, then create a process to gain feedback from this team as well.
Categories: Companies

The Keys to Building Diverse and Inclusive Teams

Good Data - 12 min 13 sec ago

Study after study has shown that an environment fueled by diverse, inclusive teams is key to enhancing innovation, inspiring creative problem solving, and fostering greater agility in adapting to the changing needs of today’s business environment. Here at GoodData, we’re proud to promote diversity and inclusion — not only because it’s good for our bottom line, but because it’s the right thing to do.

As GoodData’s Chief People Officer, I’ve had the pleasure of being deeply involved in our initiative to foster diversity in our workforce. We built this initiative around five key success drivers:

Truly Listen

Create an environment where people feel comfortable sharing candid feedback through engagement surveys, focus groups, and other opportunities.

Lock in Leadership Engagement and Ownership

Recognize that when programs have the backing of your leaders, they get more traction across all areas of the company.

Enlist and Empower Champions

Recognize and amplify the passion of people who believe in the values of the company, its culture, and its products.

Build a Shared Vision

Once you have your champions in place, pick one or two key areas of impact and agree on a vision with clear success metrics that allow you to monitor your progress.

Unleash the Power of Community

Allow your initiatives to be driven by community, which offers two advantages:

  • Generating energy and momentum for the initiative
  • Building a vibrant culture where people connect on things they’re truly passionate about

To learn more about our diversity initiatives at GoodData, please check out the video below:

[Video: GoodData’s diversity and inclusion initiatives]
Categories: Companies

Thoughts on Choosing the Right Big Data Solution

Good Data - 12 min 13 sec ago

Drop in on any conversation between CIOs and CDOs, and chances are you’ll hear the term “big data” pop up more than once. As enterprises strive to deal with the increasingly overwhelming volume, velocity, and variety of their data, specialized solutions are becoming essential. But how do you decide which solution is right for your organization?

That’s the issue Datamation’s Cynthia Harvey tackles in her latest article, “Comparing Big Data Solutions.” Harvey advises readers to begin addressing this task with the most basic question, “Do I even need this?”

If the answer is “yes” (and it probably is), she recommends discussing the need with existing vendor-partners, breaking the massive undertaking into smaller projects, and considering the advantages of cloud-based solutions over hosting on premises.

Harvey then goes on to ask a series of experts to offer their best tips — including GoodData’s CEO and founder, Roman Stanek, who advises readers to focus on their business objectives:

"Many customers get into a feature and functionality bake-off, when in reality you need to think about how you are going to partner with a vendor to ensure your success in bringing an analytics offering to market," explains Roman Stanek, CEO and founder at GoodData. He adds, "As opposed to thinking strictly about individual features, consider the wealth of expertise and knowledge a vendor can bring to your partnership."

Stanek says that the most important question a company can ask their big data vendor is "How are you going to help me or allow me to create value from my data assets?" In addition, he advises, "Consider how you are going to productize the analytics solution to turn it into a profit center for your business. Work backwards as you would with any new product or feature you are going to introduce to your product portfolio."

Other expert tips include looking for scalability, ensuring that the solution can handle many data types, and leveraging existing investments. Harvey collected some interesting insights from a diverse group of thought leaders, and I encourage you to take a look.

Click here to enjoy the whole article.

Categories: Companies

GoodData Welcomes Lars Farnstrom to Help Drive Global Expansion

Good Data - 12 min 13 sec ago

As a key part of our continuing global expansion effort, GoodData is proud to welcome Lars Farnstrom as our first sales director for Europe, the Middle East and Africa.

Lars brings with him more than 20 years of experience in enterprise software across a variety of senior positions ranging from implementation to marketing and sales. Prior to joining GoodData, Lars worked at Insidesales as sales director for EMEA for the predictive forecasting and analytics product line that Insidesales acquired from C9. In addition, Lars has worked for BOARD International and Siebel Systems. Lars credits GoodData's mission to change the economics of data by transforming it from a cost center into a revenue-producing profit center as a key reason he joined the team, and he’s looking forward to helping GoodData deliver the net-new value of data to the EMEA market.

“Enterprise data monetization is a very hot topic in Europe right now, and there’s a real market need for robust, proven solutions to support it,” Lars said. “GoodData not only has EU-ready servers and services, but now we have actual sales boots on the ground and I’m thrilled to be a part of the team!”

Lars joins GoodData during a time of rapid international business expansion for our company. As of 2016, nearly 50 percent of our end users access our platform from outside of the United States. To support these international customers and their growing data security requirements, as well as grow new international business, we have expanded our global sales organization and are now in the process of opening our second international data center.

“We’ve listened to the requirements our customers have for GoodData to continue enhancing its global infrastructure and support,” said Blaine Mathieu, Chief Marketing and Product Officer at GoodData. “And we are confident that these investments, coupled with Lars’ leadership, will further drive GoodData’s growth.”

Categories: Companies