The OpenNebula Project is proud to announce the first agenda and line-up of speakers for the fifth OpenNebula Conference, to be held in Cambridge, MA from the 19th to the 20th of June 2017. Guided by your feedback from previous editions, we have included more educational and community sessions for learning and networking.

Keynotes
The agenda includes four keynote speakers:
- James Cuff from Harvard FAS Research Computing will discuss their strategy moving forward and the infrastructure currently in place to allow seamless provisioning of research computing.
- Justin Riley from Harvard FAS Research Computing will give a keynote about how they converted their internal VM infrastructure from a completely home-made KVM cluster to a more robust and reliable system powered by OpenNebula and Ceph, configured with public cloud integration.
- Jack Wadden from Akamai will give a keynote about how they use OpenNebula in their system for saving and cloning multi-node integration test environments on demand.
- Alfonso Aurelio Carrillo Aspiazu from Telefonica will give a keynote about how they use OpenNebula and ON.Lab's ONOS to prototype a new generation of Central Offices.
This year we will also offer two pre-conference tutorials.
We have also increased the amount of educational content, with presentations from the OpenNebula team showing and demoing some of the most requested features and latest integrations:
- Orchestration of VMware Datacenters with OpenNebula
- OpenNebula Hybrid Clouds with Amazon and Azure
- Configuration Management with OpenNebula and Ansible
- Building Clouds with OpenNebula and Ceph
- Disaster Recovery and High Availability with OpenNebula
- Using Docker with OpenNebula
We had a big response to the call for presentations. Thanks to everyone who submitted a talk proposal! Although all submissions were of very high quality and merit, because this year we increased the educational content we only have space for a few community presentations. Jordi Guijarro from CSUC, Roy Keene from Knight Point and Hayley Swimelar from LINBIT will discuss their experiences and integrations with OpenNebula.
We will also have two Meet the Experts sessions, providing an informal atmosphere where delegates can interact with experts who will give their undivided attention for knowledge, insight and networking, as well as a session for 5-minute lightning talks. If you would like to speak in these sessions, please contact us!
Besides its amazing talks, there are multiple goodies packed with the OpenNebulaConf registration. There is still time to get a good price on tickets: a 20% discount applies until May 12.

Sign Up Now
We are looking forward to welcoming you personally in Boston!
This monthly newsletter features the latest developments in the OpenNebula project, highlights from the community and the dissemination efforts carried out in the project this past month.
As you may know, the first ever US edition of the OpenNebulaConf will take place on June 19-20 in Cambridge, Massachusetts. Check out the keynotes by Akamai and Harvard, and reserve your seat in Boston! The complete agenda will be out in a few days.

Technology
OpenNebula 5.4 is going to set a new mark in cloud management excellence. A wealth of new features are now being stabilized, and that is what's keeping the development team busy these days. As you know, the vCenter driver is getting a major revamp in terms of storage and network management (check out more details in this article). These new features will enhance the OpenNebula provisioning model over vCenter-based infrastructures, increasing the already wide range of use cases that can be implemented with your favourite Cloud Management Platform.
Let us lay down another set of features that are recent arrivals to the release:
- Enhanced VM history logging
- Image persistency selection
- Modifiable semantics for permissions
- IPv6 non-SLAAC Address Ranges
... and there will be more before the beta is released by the end of the month!
The team is particularly excited about the scheduler's new ability to create VM Groups with roles, in which you can define affinity/anti-affinity between VMs and virtualization hosts. This way OpenNebula supports use cases where VMs need to be placed together for licensing reasons, network performance reasons (placing the database and application server together, for instance), computational reasons, or a wide range of other scenarios. As usual, a picture speaks a thousand words. This feature has been sponsored by BlackBerry in the context of the Fund a Feature program.
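As a sketch of what such a definition might look like, based on the feature description above (the group and role names are invented for illustration, and attribute names follow the 5.4 development documentation, so they may change before the final release):

```
# Hypothetical VM Group: two roles whose VMs should land on the same host,
# e.g. a database and the application server that talks to it.
NAME = "web-stack"

ROLE = [ NAME = "db" ]
ROLE = [ NAME = "app" ]

# Inter-role affinity rule: schedule db and app VMs together
AFFINED = "db, app"
```

A VM would then join the group by referencing the group and role name from its own template, and the scheduler would enforce the placement rule.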
Check out all the things we are still working on in the project's development portal.

Community
New advancements in the community are always a pleasure to review, and it is time to note that the OpenNebula community is as supportive and caring as ever. Newcomers surely feel the warmth of the support forum, where users, developers and integrators do their best to help with questions and problems about OpenNebula and really interesting use cases. We'd like to illustrate this spirit with this thread, where people go out of their way to help someone use OpenNebula with resources from a hosting company. Way to go!
It is always a pleasure to learn how OpenNebula is used in a wide range of industry niches and institutions, such as its use in the Turin INFN science cloud. Building reliable, useful, real-world clouds has been our main goal all along!
An excellent post describing the hybrid model, and why it is interesting for enterprises and institutions, is featured on the OpenNebula blog this month. The hybrid model in OpenNebula is a native capability that delivers on the traditional cloud promise: infrastructure elasticity.
Combining the resources of public cloud providers with private ones, depending on execution requirements, the need for extra capacity, or the level of protection and security required for services handling sensitive data, is among the scenarios this model is designed to answer.
We are very proud to announce that OpenNebula is a key component of the recently announced Telefonica rendering of the CORD framework (R-CORD)! We believe this is an important step toward developing resilient clouds that deliver residential phone and cloud services to end users. The architecture of the solution can be found here, and it is a good read for anyone interested in cloud and NFV.

Outreach
The OpenNebulaConf is the perfect spot to meet other cloud professionals and take the temperature of the cloud computing field (check the material from the 2016 edition if you want to know more). The first US edition of the OpenNebulaConf is happening in 2017, as one of two OpenNebulaConf editions this year, one in the US and the other in Europe. The US edition will take place on June 19-20 in Cambridge, Massachusetts, and the European edition will be held in Madrid, Spain, on October 23-24.
June is approaching fast! Have you reserved your seat in Boston? If you are interested in hearing what Harvard and Akamai have to say about their use of OpenNebula in their production infrastructures, you need to move quickly and make sure you get a place!
This past month members of the OpenNebula team went to Prague and field-tested the new tutorial format, in which the cloud is deployed within a public cloud provider. This helped a lot for attendees with Windows or 32-bit laptops.
The next TechDay will take place in Madrid, hosted by Telefonica, and the speaker line-up looks promising. Stay tuned if you are in the area, since you may be interested in attending the hands-on session and the talks.
The OpenNebula team is going to have a booth at both VMworld events this year. If you are going to attend, do not forget to come by the OpenNebula booth to see a live demo of the latest stable version of your favourite CMP:
- VMworld 2017 US, August 27-31, Las Vegas, Mandalay Bay Hotel & Convention Center
- VMworld 2017 Europe, September 11-14, Barcelona, Fira Gran Via
Also, check out the list of official training from OpenNebula Systems for this year. If you are new to OpenNebula, or want to improve your knowledge with an in-depth OpenNebula admin course, those are the dates and locations you need to keep in mind.
We are looking for summer students to work on three different projects. The work location is Argonne National Laboratory, near Chicago, Illinois. If you are interested in working with us on any of these, please get in touch.

Investigating Hadoop dynamic scaling
We are creating a platform for running geospatial analysis operations in a scalable manner using cloud computing resources. In order to support a large number of users with varying workloads, the platform must dynamically manage deployments of compute resources.
Hadoop is heavily used by applications running on this platform. The purpose of this project is to study the scalability patterns of a geospatial analysis application, UrbanFlow, in order to derive scaling policies that will allow the system to dynamically vary the number of Hadoop workers and provide good response times. Since data locality in particular is crucial to Hadoop, this project will evaluate how data placement patterns can help or hinder dynamic scaling.
The objectives of this project are:
- Study data access and computing patterns of UrbanFlow
- Propose scaling policies using these patterns that will optimize response time for various workloads
- Develop a dynamic scaling engine that can enact such policies
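To make the third objective concrete, here is a minimal sketch of the kind of threshold-based policy such an engine could enact. The function name, thresholds and defaults are illustrative assumptions, not part of any existing codebase:

```python
# Hypothetical scaling policy: size the Hadoop worker pool from the task
# backlog and the observed response time. All parameters are assumptions.

def target_workers(current, pending_tasks, avg_response_s,
                   target_response_s=30.0, tasks_per_worker=8,
                   min_workers=2, max_workers=64):
    """Return the desired number of Hadoop workers for the next interval."""
    # Capacity needed to drain the queue at the assumed per-worker throughput.
    by_queue = pending_tasks // tasks_per_worker
    # Capacity needed to bring response time back under the target.
    by_latency = int(current * avg_response_s / target_response_s)
    needed = max(by_queue, by_latency, 1)
    # Clamp to the allowed cluster size.
    return max(min_workers, min(max_workers, needed))
```

A real policy derived from UrbanFlow would also weigh data locality: removing a worker that holds hot HDFS blocks can cost more than it saves, which is exactly the trade-off this project sets out to study.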
Traces from existing parallel and distributed computing systems are a useful resource for researchers to replicate real-life workloads in their experiments. However, there is little material available from cloud computing systems. We propose to develop a trace archive that will provide traces from various cloud systems, combined with tools to replay them. This effort initially focuses on OpenStack clouds, but will eventually include other cloud technologies.
The objectives of this project are:
- Define a cloud workload trace format after reviewing existing trace formats. This format should be flexible enough to support other cloud technologies in the future.
- Develop tools to extract workloads from OpenStack systems, converting them into the chosen trace format.
- Develop tools to replay traces on an OpenStack deployment for experimental purposes. We will use the Chameleon testbed as a platform for deploying OpenStack.
- Create a platform (potentially reusing existing software) for hosting traces and allowing others to contribute.
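To illustrate the first objective, a trace record might look something like the sketch below. Every field name here is an assumption for illustration; the project would define the real schema only after reviewing existing trace formats:

```python
# Illustrative shape of one record in a cloud workload trace. The field
# names are invented for this sketch, not a published schema.
import json

record = {
    "event_id": 1,
    "timestamp": "2017-05-01T12:00:00Z",
    "event_type": "instance_create",  # e.g. create / delete / resize
    "instance_id": "a1b2c3",
    "vcpus": 2,
    "memory_mb": 4096,
    "cloud": "openstack",             # leaves room for other technologies
}

# One JSON object per line keeps a trace easy to stream, filter and replay.
line = json.dumps(record, sort_keys=True)
```

A line-oriented, self-describing format like this would let replay tools for other cloud technologies consume the same archive without format conversions.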
We are working on a platform that seeks to combine two types of scheduling: batch/best-effort scheduling typically used in HPC datacenters and on-demand scheduling available in commercial clouds. This project is developing a meta-scheduler that switches between these different modes of scheduling to ensure meeting both user satisfaction goals (in terms of resource availability) and provider satisfaction (in terms of utilization). The overall objective of this project is to use an existing implementation and a set of traces from on-demand and batch jobs and explore different usage scenarios in this context.
The relevant tasks are as follows:
- Evaluate and potentially enhance the existing implementation to add additional features
- Define and run experiments evaluating features of the resulting platform
- Contrast and compare the work with existing platforms such as Mesos
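The mode-switching idea described above can be sketched as a toy decision function: serve on-demand requests first because they carry availability guarantees, and backfill idle capacity with best-effort batch jobs. All names and thresholds are illustrative; the project's actual implementation differs:

```python
# Toy sketch of a meta-scheduler's mode selection. Hypothetical names;
# the real platform's logic is more involved.

def choose_mode(utilization, on_demand_queue, batch_queue, high_water=0.9):
    """Pick which queue the meta-scheduler should serve next."""
    if on_demand_queue:
        # On-demand requests win: they promise resource availability.
        return "on_demand"
    if batch_queue and utilization < high_water:
        # Backfill spare capacity with best-effort batch work.
        return "batch"
    return "idle"
```

The interesting research questions start where this sketch ends: when to preempt running batch jobs, and how to trade user satisfaction (availability) against provider satisfaction (utilization).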
As you may already know, this year the fifth edition of OpenNebulaConf is taking place in Boston on June 19-20. If you plan to attend and can save the date now, you can take advantage of a 40% discount on your Conf tickets until March 31st.

Sign Up Now
Thanks to everyone who has already submitted presentations for our first event in North America. Due to several requests for an extension of the deadline for the call for presentations, we will now be accepting submissions until this Wednesday, March 22nd.

Submit a Talk Proposal
We look forward to seeing you in Boston!
The Computer Physics Communications journal has just made available online our latest work on SaaS+PaaS architectures for service-driven computing. This is again the result of our collaboration with the Institute of Computing Technology of the Chinese Academy of Sciences, and it can be accessed here.
Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source for these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provisioning process is not appropriately optimized. In the present contribution we propose a two-level solution that, on the one hand, takes advantage of approximate computing to reduce resource demand and, on the other, uses admission control policies to guarantee optimal provisioning to running applications.
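For readers unfamiliar with why MCMC workloads are so resource hungry, the sketch below shows a generic Metropolis sampler: useful estimates require long, inherently sequential chains. This is textbook MCMC for illustration only, not the method or code from the paper:

```python
# Generic random-walk Metropolis sampler (illustrative, not from the paper).
import math
import random

def metropolis(log_density, x0, steps, step_size=1.0, seed=42):
    """Sample from exp(log_density) using a Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        delta = log_density(proposal) - log_density(x)
        # Accept with probability min(1, exp(delta)).
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal. After burn-in the chain mean should be near 0.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, steps=20000)
mean = sum(chain[1000:]) / len(chain[1000:])
```

Approximate computing attacks exactly this cost: accepting slightly looser estimates lets the chain stop earlier, so fewer cloud resources need to be provisioned.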
We are happy to announce that the Nodester CLI (command line interface) has officially reached 1.0 status! At the time of this article we are actually on v1.0.2, which now supports Node.js version 0.8.1 by default!
Here is a list of the updates:
- nodester app delete -> nodester app destroy (because delete is a reserved word in JS). Almost every delete action is now destroy.
- nodester app list: alias to nodester apps (and vice versa)
- nodester authors: Show the contributors for this tool.
- Shorthands: e.g. nodester app l maps to nodester app logs.
- Better help with headers and stuff.
- JSHint code linting (all code is valid JS)
- Use of "use strict"
- Partial node-0.8.1 support
- Travis support
- Bump to 1.0.0.
- New “Hello World” app
- A more useful nodester app init command. Now it lets you choose between a hello world app or an autoupdate remote (defaults to nodester).
- Also, no more "You need to restart your app" after npm installs
- New API: nodester client. With this command you can now interact with your personal instance easily. Running nodester client set <endpoint> <brand> will set up your instance. Really useful.
The new CLI also walks you through deploying your app from scratch or as an existing application. See below:
$ nodester app create 081
nodester info creating app: 081 server.js
nodester info successfully created app 081 to will run on port 19144 from server.js
nodester info run nodester app init 081 to setup this app.
nodester info ok!
$ nodester app init 081
nodester info What do you want to do:
(1) Setup a new app from scratch?
(2) You just want to setup your existent app?
note: if you choose 2 be sure that you are into your app’s dir
nodester info initializing git repo for 081 into folder 081
nodester warn this will take a second or two
nodester info cloning your new app in 081
nodester info clone complete
nodester info writing the default configuration
nodester info processing the initial commit
nodester info Nodester!
remote:                        _          _
remote:           _ __   ___  __| | ___ ___| |_ ___ _ __
remote:          | '_ \ / _ \ / _  |/ _ \ __| __/ _ \ '__|
remote:          | | | | (_) | (_| |  __\__ \ |_  __/ |
remote:          |_| |_|\___/ \__,_|\___|___/\__\___|_|
remote:           Open Source Node.js Hosting Platform
remote:               http://github.com/nodester
remote: Syncing repo with chroot
remote: From /node/git/topher/10991-985dc656de547235fe586e3debb0ce6b
remote:  * [new branch]      master     -> origin/master
remote: Attempting to restart your app: 10991-985dc656de547235fe586e3debb0ce6b
remote: App restarted..
remote:   \m/ Nodester out \m/
 * [new branch]      master -> master
nodester info 081 started.
nodester info Some helpful app commands:
      cd ./081
      curl http://081.nodester.com/
      nodester app info
      nodester app logs
      nodester app stop|start|restart
nodester info ok!
3-2-1 BLAST OFF! Nodester now runs Node.js version 0.8.1 by default if you upgrade to our latest CLI v1.0.2! You can upgrade from 0.4.12 or 0.6.17 to 0.8.1 by simply opening your package.json file and changing the following entry:
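The snippet itself did not survive this page's formatting; assuming Nodester reads the standard `engines` field of package.json, the change was presumably along these lines:

```json
{
  "engines": {
    "node": "0.8.1"
  }
}
```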
Commit your change to git and push it to nodester:
git commit -am "upgrading to v0.8.1"
Hack the planet! Thanks again to @_AlejandroMG for performing this update!