Feed aggregator

Australia’s Cloud Market is Targeted to hit $4.55 Billion? That’s the buzz at Salesforce World Tour in Melbourne!

CipherCloud Blog - Mon, 03/23/2015 - 17:45

Recently Salesforce hosted its largest Australian technology conference of the year, Salesforce World Tour, in Melbourne, Australia!  Our team was there representing CipherCloud at our booth and they all had an absolute blast!  If you’ve never been to a Salesforce event, I highly recommend going.  Salesforce events are packed with informative speakers, innovative partners, as […]

The post Australia’s Cloud Market is Targeted to hit $4.55 Billion? That’s the buzz at Salesforce World Tour in Melbourne! appeared first on CipherCloud.

Categories: Companies

CISOs Talk Incident Response: 3 Steps to Breach Readiness

CipherCloud Blog - Fri, 03/20/2015 - 11:26

This is Part 1 in a series; to read Part 2, see: CISOs Talk Incident Response: Educating Your Board. Recently I had the pleasure of attending a CISO networking dinner. The event drew high-profile CISOs from the financial services, healthcare, retail, education, and hospitality industries. Incident response was THE topic of the evening. […]

The post CISOs Talk Incident Response: 3 Steps to Breach Readiness appeared first on CipherCloud.

Categories: Companies

Best Practices in Financial Services Cloud Adoption Strategies

CipherCloud Blog - Wed, 03/18/2015 - 10:37

At long last, the financial services industry is embracing cloud computing, but cloud adoption in the financial sector hasn’t yet accelerated to its full potential. Many firms haven’t yet found a cloud security platform or strategy that’s mature and robust enough to meet the stringent data privacy and security requirements that financial services firms must […]

The post Best Practices in Financial Services Cloud Adoption Strategies appeared first on CipherCloud.

Categories: Companies

Windows Server 2012 R2, IIS 8.5, WebSockets and .NET 4.5.2

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

During the last couple of weeks we've upgraded worker servers in the US and EU regions to support Windows Server 2012 R2, IIS 8.5 and .NET 4.5.2. Major upgrades like this can be risky and lead to compatibility issues, so the upgrade was carefully planned and executed to maximize compatibility with running applications. Application performance and error rates have been closely monitored throughout the process and fortunately, chances are you haven't noticed a thing: we've detected migration-related issues with less than 0.1% of running applications.

Many of the new features and configuration improvements enabled by this upgrade will be gradually introduced over the coming months. This way we can ensure a continued painless migration and maintain compatibility with the previous Windows Server 2008 R2/IIS 7.5 setup, while we iron out any unexpected kinks if and when they crop up. A few changes have however already been deployed that we wanted to fill you in on.

WebSocket support and the beta region

Last year we introduced the beta region, featuring experimental Windows Server 2012 and WebSocket support. The beta region allowed customers to test existing and new apps on the new setup while we prepared and optimized it for production use. This approach has been an important factor in learning about subtle differences between the server versions and in addressing pretty much all compatibility issues before upgrading the production regions. Thanks to all the customers who provided valuable feedback during the beta and helped ensure a smoother transition for everyone.

An important reason for the server upgrade was to support WebSocket connections. Now that the worker servers are running WS2012 and IIS 8.5 we've started doing just that. Applications in the old beta region have been merged into the production US region and the beta region is no longer available when you create a new application.

Most load balancers already support WebSockets and the upgrade is currently being rolled out to the remaining load balancers. Apps created since August 14th fully support WebSockets and no configuration is necessary: AppHarbor will simply detect and proxy connections as expected when a client sends a Connection: Upgrade header.
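If you want to verify WebSocket connectivity end to end, a raw handshake from a .NET 4.5 client is enough. Here's a minimal sketch, assuming a hypothetical app at myapp.apphb.com exposing a /socket endpoint:

using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

class WebSocketSmokeTest
{
    static void Main()
    {
        ConnectAsync().Wait();
    }

    static async Task ConnectAsync()
    {
        // ClientWebSocket sends the Connection: Upgrade handshake; the load
        // balancer detects it and proxies the connection to your worker.
        using (var ws = new ClientWebSocket())
        {
            await ws.ConnectAsync(new Uri("wss://myapp.apphb.com/socket"),
                CancellationToken.None);
            Console.WriteLine(ws.State); // Open, if the handshake succeeded
        }
    }
}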

Some libraries, such as SignalR, will automatically detect and prefer WebSocket connections when supported by both the server and client. Until WebSocket connections are supported on all load balancers, some apps may attempt the WebSocket handshake and fail. This should not cause issues since these libraries will fall back to other supported transports, and affected apps will automatically be WebSocket-enabled once their load balancers support it.

CPU throttling

One of the major challenges that has held back this upgrade is a change in the way we throttle worker CPU usage. CPU limitations are the same as before, but the change can affect how certain CPU-intensive tasks are executed. Resources and documentation on this subject are limited, but testing shows that CPU time is more evenly scheduled across threads, leading to higher concurrency, consistency and stability within processes. While this is overall an improvement it can also affect peak performance on individual threads, and we're currently investigating various approaches to better support workloads affected by this.

For the curious, we previously used a CPU rate limit registry setting to limit CPU usage per user account, but this is no longer supported on Windows Server 2012. We now use a combination of IIS 8's built-in CPU throttling and a new CPU rate control for job objects to throttle background workers.
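For those curious about the job object mechanism, the sketch below shows how a process group can be hard-capped with the CPU rate control introduced in Windows Server 2012. This is not the exact code we run, just a minimal illustration of the Win32 API involved:

using System;
using System.Runtime.InteropServices;

class JobCpuCapSketch
{
    const int JobObjectCpuRateControlInformation = 15;
    const uint JOB_OBJECT_CPU_RATE_CONTROL_ENABLE = 0x1;
    const uint JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP = 0x4;

    // Matches JOBOBJECT_CPU_RATE_CONTROL_INFORMATION for the CpuRate case
    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_CPU_RATE_CONTROL_INFORMATION
    {
        public uint ControlFlags;
        public uint CpuRate; // percent * 100, so 2500 caps the job at 25%
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr attributes, string name);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr job, int infoClass,
        ref JOBOBJECT_CPU_RATE_CONTROL_INFORMATION info, int length);

    static void Main()
    {
        var job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_CPU_RATE_CONTROL_INFORMATION
        {
            ControlFlags = JOB_OBJECT_CPU_RATE_CONTROL_ENABLE |
                           JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP,
            CpuRate = 2500
        };
        if (!SetInformationJobObject(job, JobObjectCpuRateControlInformation,
                ref info, Marshal.SizeOf(typeof(JOBOBJECT_CPU_RATE_CONTROL_INFORMATION))))
            throw new System.ComponentModel.Win32Exception();
        // Processes added with AssignProcessToJobObject are now throttled.
    }
}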

If you've experienced any issues with this upgrade or have feedback about the process, please don't hesitate to reach out.

Categories: Companies

Heartbleed Security Update

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Updated on April 10, 2014 with further precautionary steps in the "What can you do" section below.

On April 7, 2014, a serious vulnerability in the OpenSSL library (CVE-2014-0160) was publicly disclosed. OpenSSL is a cryptography library used for the majority of private communications across the internet.

The vulnerability, nicknamed "Heartbleed", would allow an attacker to steal secret certificate keys, the names and passwords of users, and other secrets encrypted using the OpenSSL library. As such it represents a major risk for a large number of internet applications and services, including AppHarbor.

What has AppHarbor done about this

AppHarbor responded to the announcement by immediately taking steps to remediate the vulnerability:

  1. We updated all affected components with the updated, secure version of OpenSSL within the first few hours of the announcement. This included SSL endpoints and load balancers, as well as other infrastructure components used internally at AppHarbor.
  2. We re-keyed and redeployed all potentially affected AppHarbor SSL certificates (including the piggyback *.apphb.com certificate), and the old certificates are being revoked.
  3. We notified customers with custom SSL certificates last night, so they could take steps to re-key and reissue certificates, and have the old ones revoked.
  4. We reset internal credentials and passwords.
  5. User session cookies were revoked, requiring all users to sign in again.

Furthermore, AppHarbor validates session cookies against your previously known IP addresses as part of the authorization process, which has reduced the risk of a stolen session cookie being abused. Perfect forward secrecy was deployed to some load balancers, making it impossible to decrypt intercepted communication with stolen keys. Forward secrecy has since been deployed to all load balancers hosted by AppHarbor.

What can you do

We have found no indication that the vulnerability was used to attack AppHarbor. By quickly responding to the issue and taking the steps mentioned above we effectively stopped any further risk of exposure. However, due to the nature of this bug, we recommend users who want to be extra cautious to take the following steps:

  1. Reset your AppHarbor password.
  2. Review the sign-in and activity history on your user page for any suspicious activity.
  3. Revoke authorizations for external applications that integrate with AppHarbor.
  4. Recreate, reissue and reinstall any custom SSL certificates you may have installed, and revoke the old ones (see the sketch after this list). Note that reissuing may automatically revoke the old certificates, so make sure you're ready to install the new certificates.
  5. Read the details about the Heartbleed bug here and assess the risks relative to your content.
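If you manage certificates with OpenSSL, re-keying typically starts with generating a fresh key and certificate signing request; the exact reissue and revocation steps depend on your CA:

openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr

Submit the new CSR to your certificate authority for reissue, install the reissued certificate, and then ask the CA to revoke the certificate that used the old key.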

Updated instructions (April 10, 2014):

While we still have not seen any abuse on AppHarbor as a result of this bug, we now also encourage you to take these precautionary steps:

  1. Reset your build URL token.
  2. If you're using one of the SQL Server or MySQL add-ons: Reset the database password. Go to the add-on's admin page and click the "Reset Password" button. This will immediately update the configuration on AppHarbor and redeploy the application (with a short period of downtime until it is redeployed).
  3. If you're using the Memcacher add-on: Reinstall the add-on by uninstalling and installing it.
  4. Rotate/update sensitive information in your own configuration variables.

If you have hardcoded passwords/connection strings for any of your add-ons, this is a good opportunity to start using the injected configuration variables. You can find instructions for the SQL add-ons here and the Memcacher add-on here. This way your application is automatically updated when you reset the add-ons, or when an add-on provider updates the configuration. If this is not an option you should immediately update your code/configuration files and redeploy the application after the configuration is updated.
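Reading the injected variables from code is straightforward with the standard configuration APIs. A minimal sketch; the key names below are hypothetical, so use the names shown on your add-on's admin page:

using System.Configuration;

// Injected by AppHarbor at deploy time; no secrets in source control
var connectionString =
    ConfigurationManager.ConnectionStrings["SqlServer"].ConnectionString;
var memcacherPassword =
    ConfigurationManager.AppSettings["MEMCACHER_PASSWORD"];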

Stay tuned

Protecting your code and data is our top priority, and we continue to remediate and assess the risks in response to this issue. We'll keep you posted on any new developments, so stay tuned on Twitter and the blog for important updates. We're of course also standing by on the support forums if you have any questions or concerns.

Categories: Companies

Librato integration and built-in performance metrics

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Librato Dashboard

Being able to monitor and analyze key application metrics is an essential part of developing stable, performant and high quality web services that meet your business requirements. Today we're announcing a great new set of features to provide a turnkey solution for visualizing, analyzing and acting on key performance metrics. On top of that we're enabling you to easily track your own operational metrics. In this blog post we'll look at how the pieces tie together.

Librato integration

The best part of today's release is our new integration with Librato for monitoring and analyzing metrics. Librato is an awesome and incredibly useful service that enables you to easily visualize and correlate metrics, including the new log-based performance metrics provided by AppHarbor (described in more detail below).

Librato Dashboard

Librato is now available as an add-on and integrates seamlessly with your AppHarbor logs. When you provision the add-on, Librato will set up a preconfigured dashboard tailored for displaying AppHarbor performance data, and you can access it immediately by going to the Librato admin page. Everything will work out of the box without any further configuration, and your logs will automatically be sent to Librato using a log drain.

When log messages containing metric data are sent to Librato, they're transformed by an l2met service before being sent to Librato's regular API. A very cool feature of the l2met service is that it can automatically calculate some useful metrics. For instance, it'll calculate the median response time as well as the 99th and 95th percentiles of measurements such as response times. The perc99 response time is the time within which the fastest 99% of responses complete. This value is useful to know since it's less affected by a few very slow responses than the average is. Among other things this provides a good measurement of the browsing experience for most of your users.
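To illustrate why percentiles are more robust than averages, here's a small sketch using the nearest-rank percentile definition (l2met does this calculation for you; this is illustration only):

using System;
using System.Linq;

class PercentileSketch
{
    static double Percentile(double[] samples, double p)
    {
        var sorted = samples.OrderBy(t => t).ToArray();
        var rank = (int)Math.Ceiling(sorted.Length * p / 100.0) - 1;
        return sorted[Math.Max(rank, 0)];
    }

    static void Main()
    {
        // 198 fast responses and 2 very slow outliers
        var responseTimesMs = Enumerable.Range(1, 200)
            .Select(i => i <= 198 ? 50.0 : 5000.0)
            .ToArray();
        Console.WriteLine(Percentile(responseTimesMs, 99)); // 50: unaffected by the outliers
        Console.WriteLine(responseTimesMs.Average());       // 99.5: skewed by the outliers
    }
}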

Librato Dashboard

The l2met project was started by Ryan Smith - a big shout-out and thanks to him and the Librato team for developing this great tool.

For more information about how to integrate with Librato and details about the service please refer to the documentation here. Also check out their announcement blog post about the integration.

Built-in performance metrics

AppHarbor can now write key runtime performance metrics directly to your application’s log stream as l2met 2.0 formatted messages similar to this:

source=web.5 sample#memory.private_bytes=701091840
source=web.5 sample#process.handles=2597
source=web.5 sample#cpu.load_average=1.97

These are the messages Librato uses as well and most of them are written every 20 seconds. They allow for real-time monitoring of worker-specific runtime metrics such as CPU (load average) and memory usage, as well as measurements of response time and size reported from the load balancers. Because these metrics are logged to your log stream you can also consume them in the same way you’d usually view or integrate with your logs.

Load average run-time metrics

Performance data collection takes place completely out-of-process, without using a profiler, and it can be enabled and disabled without redeploying the application. This means that monitoring won’t impact application performance at all and that a profiler (such as New Relic) can still be attached to the application.

Writing custom metrics

The performance data provided by AppHarbor is probably not the only metrics you want to track. You can of course integrate directly with Librato's API, but the l2met integration makes it easier than ever to track your own metrics, and the paid Librato plans include the ability to track custom metrics for exactly that purpose.

You can start writing your own metrics simply by sending an l2met-formatted string to your logs. Last week we introduced the Trace Logging feature which is perfect for this, so writing your custom metrics can now be done with a simple trace:

Trace.TraceInformation("measure#twitter.lookup.time=433");

To make this even easier we've built the metric-reporter library (a .NET port of Librato's log-reporter) to provide an easy-to-use interface for writing metrics to your log stream. You can install it with NuGet:

Install-Package MetricReporter

Then initialize a MetricReporter which writes to a text writer:

var writer = new L2MetWriter(new TraceTextWriter());
var reporter = new MetricReporter(writer);

And start tracking your own custom metrics:

reporter.Increment("jobs.completed");
reporter.Measure("payload.size", 21276);
reporter.Measure("twitter.lookup.time", () =>
{
    //Do work
    twitterRequest.GetResponse();
});

On Librato you can then view charts with these new metrics along with the performance metrics provided by AppHarbor, and add them to your dashboards, aggregate and correlate data, set up alerts etc. The MetricReporter library will take care of writing l2met-formatted metrics using the appropriate metric types and write to the trace or another IO stream. Make sure to inspect the README for more examples and information on configuration and usage.

That’s all we have for today. There’ll be more examples on how you can use these new features soon, but for now we encourage you to take it for a spin, install the Librato add-on and test the waters for yourself. We’d love to hear what you think so if there are other metrics you’d like to see or if you experience any issues please hit us up through the usual channels.

Categories: Companies

Introducing Trace Logging

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Today we're happy to introduce trace message integration with your application log. With tracing you can easily log trace messages to your application's log stream from anywhere in your application, using the built-in tracing capabilities of the .NET framework.

When introducing the realtime logging module a while back, we opened up access to collated log data from load balancers, the build and deploy infrastructure, background workers and more. Notably missing, however, was the ability to log from web workers. We're closing that gap with tracing, which can be used in both background and web workers.

How to use it

The trace feature integrates with standard .NET tracing, so you don’t have to make any changes to your application to use it. You can simply log traces from your workers with the System.Diagnostics.Trace class:

Trace.TraceInformation("Hello world");

This will yield a log message containing a timestamp and the source of the trace in your application’s log like so:

2014-01-22T06:46:48.086+00:00 app web.1 Hello world

You can also use a TraceSource by specifying the trace source name AppHarborTraceSource:

var traceSource = new TraceSource("AppHarborTraceSource", defaultLevel: SourceLevels.All);
traceSource.TraceEvent(TraceEventType.Critical, 0, "Foo");

You may not always want noisy trace messages in your logs and you can configure the trace level on the "Logging" page. There are 4 levels: All, Warning, Error and None. Setting the trace level will update the configuration without redeploying or restarting the application. This is often desirable if you need to turn on tracing when debugging and diagnosing an ongoing or state-related issue.

Configure Trace level

There are a number of other ways to use the new tracing feature including:

  • ASP.NET health monitoring (for logging exceptions, application lifecycle events etc).
  • A logging library such as NLog (Trace target) or log4net (TraceAppender) - see the NLog sketch below.
  • Integrating with ETW (Event Tracing for Windows) directly using the injected event provider id.

Anything that integrates with .NET tracing or ETW should work, and you can find more details and examples in this knowledge base article.
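For example, pointing NLog at the trace output takes only a few lines. A sketch using NLog's Trace target, configured programmatically (you can achieve the same with an NLog.config file):

using NLog;
using NLog.Config;
using NLog.Targets;

class NLogTraceSetup
{
    static void Main()
    {
        var config = new LoggingConfiguration();
        var traceTarget = new TraceTarget();
        config.AddTarget("trace", traceTarget);
        // Route Info and above through System.Diagnostics.Trace,
        // which AppHarbor forwards to your log stream
        config.LoggingRules.Add(new LoggingRule("*", LogLevel.Info, traceTarget));
        LogManager.Configuration = config;

        var logger = LogManager.GetCurrentClassLogger();
        logger.Info("Hello from NLog");
    }
}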

All new applications have tracing enabled by default. Tracing can be enabled for existing applications on the "Logging" page.

How does it work

Under the hood we’re using ETW for delivering log messages to the components that are responsible for sending traces to your log stream. Application performance is unaffected by the delivery of log messages as this takes place completely out of process. Note however that messages are buffered for about a second and that some messages may be dropped if you’re writing excessively to the trace output.

When tracing is enabled, AppHarbor configures your application with an EventProviderTraceListener as a default trace listener. While you can integrate directly with ETW as well we recommend using the Trace or TraceSource approaches described above.

Viewing trace messages

Traces are collated with other logging sources in your log stream, so you can consume them in the same way you’re used to. You can view log messages using the command line interface, the web viewer or set up a log drain to any HTTP, HTTPS or syslog endpoint. For more information about the various integration points please refer to this article.

Viewing trace messages in console

We've got a couple of cool features that build on this coming soon, so stay tuned and happy tracing!

Categories: Companies

.NET 4.5.1 is ready

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Microsoft released .NET 4.5.1 a while back, bringing a bunch of performance improvements and new features to the framework. Check out the announcement for the details.

Over the past few weeks we have updated our build infrastructure and application servers to support this release. We're happy to report that AppHarbor now supports building, testing and running applications targeting the .NET 4.5.1 framework, as well as solutions created with Visual Studio 2013 and ASP.NET MVC 5 applications.

There are no known issues related to this release. If you encounter problems, please refer to the usual support channels and we'll help you out.

Categories: Companies

Integrated NuGet Package Restore

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

A few months ago the NuGet team released NuGet 2.7, which introduced a new approach to package restore. We recently updated the AppHarbor build process to adopt this approach and integrate the new NuGet restore command. AppHarbor will now automatically invoke package restore before building your solution.

Automatically restoring packages is a recommended practice, especially because you don't have to commit the packages to your repository and can keep its footprint small. Until now we've recommended using the approach described in this blog post to restore NuGet packages when building your application. This has worked relatively well, but it's also a bit of a hack and has a few caveats:

  • Some NuGet packages rely on files that need to be present and imported when MSBuild is invoked. This has most notably been an issue for applications relying on the Microsoft.Bcl.Build package, for the reasons outlined in this article.
  • NuGet.exe has to be committed and maintained with the repository, and project and solution files need to be configured.
  • Package restore can intermittently fail in some cases when multiple projects are built concurrently.

With this release we expect to eliminate these issues and provide a more stable, efficient and streamlined way of handling package restore.

If necessary, NuGet can be configured by adding a NuGet.config file in the same directory as your solution file (or alternatively in a .nuget folder under your solution directory). You usually don't have to configure anything if you’re only using the official NuGet feed, but you’ll need to configure your application if it relies on other package sources. You can find an example configuration file which adds a private package source in the knowledge base article about package restore and further documentation for NuGet configuration files can be found here.
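A minimal sketch of such a NuGet.config; the private feed below is a hypothetical example:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://www.nuget.org/api/v2/" />
    <!-- Hypothetical private feed; replace with your own source -->
    <add key="MyPrivateFeed" value="https://nuget.example.com/v2/" />
  </packageSources>
</configuration>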

If you hit any snags we’re always happy to help on our support forums.

Categories: Companies

New Relic Improves Service and Reduces Price

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

We're happy to announce that New Relic has dropped the price of the Professional add-on plan from $45/month to $19/month per worker unit. Over the years New Relic has proven to be a really useful tool for many of our customers, and we're pleased that this price drop will make the features of New Relic Professional more accessible to everyone using AppHarbor.

Highlights of the Professional plan include:

  • Unlimited data retention
  • Real User Monitoring (RUM) and browser transaction tracing
  • Application transaction tracing, including Key Transactions and Cross Application Tracing
  • Advanced SQL and slow SQL analysis

You can find more information about the benefits of New Relic Pro on the New Relic website (http://newrelic.com/pricing/details).

Service update

The New Relic agent was recently upgraded to a newer version, bringing support for some recently introduced features as well as a bunch of bug fixes. Time spent in the request queue is now reported and exposed directly in the New Relic interface. Requests are rarely queued for longer than a few milliseconds, but it can happen if your workers are under load. When more time is spent in the request queue, it may be an indicator that you need to scale your application to handle the load efficiently.

We're also making a few changes to the way the New Relic profiler is initialized with your applications. This is particularly relevant if you've subscribed to New Relic directly rather than installing the add-on through AppHarbor. Going forward you'll need to add a NewRelic.LicenseKey configuration variable to make sure the profiler is attached to your application. We recommend that you make this change as soon as possible. If you're subscribed to the add-on through AppHarbor no action is required and the service will continue to work as usual.

Categories: Companies

Found Elasticsearch add-on available

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Found provides fully hosted and managed Elasticsearch clusters; each cluster has reserved memory and storage ensuring predictable performance. The HTTPS API is developer-friendly and existing Elasticsearch libraries such as NEST, Tire, PyES and others work out of the box. The Elasticsearch API is unmodified, so for those with an existing Elasticsearch integration it is easy to get started.
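Getting started from .NET with NEST takes just a few lines. A minimal sketch; the cluster URL, credentials and index name are hypothetical placeholders for the values Found gives you:

using System;
using Nest;

class LogEvent
{
    public string Message { get; set; }
    public DateTime Timestamp { get; set; }
}

class FoundSketch
{
    static void Main()
    {
        // Use the HTTPS endpoint and ACL-protected credentials from your Found console
        var settings = new ConnectionSettings(
                new Uri("https://user:pass@cluster-id.example.foundcluster.com:9243"))
            .SetDefaultIndex("myapp");
        var client = new ElasticClient(settings);

        client.Index(new LogEvent { Message = "hello", Timestamp = DateTime.UtcNow });
        var result = client.Search<LogEvent>(s => s
            .Query(q => q.Match(m => m.OnField(e => e.Message).Query("hello"))));
        Console.WriteLine(result.Total);
    }
}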

For production and mission-critical environments, customers can opt for replication and automatic failover to a secondary site, protecting the cluster against unplanned downtime. Security is a strong focus: communication to and from the service is securely transmitted over HTTPS (SSL), and data is stored behind multiple firewalls and proxies. Clusters run in isolated containers (LXC), and customisable ACLs allow for restricting access to trusted people and hosts.

In the event of a datacenter failure, search clusters are automatically failed over to a working datacenter or, in case of a catastrophic event, completely rebuilt from backup.

Co-founder Alex Brasetvik says: "Found provides a solution for companies who are keen to use Elasticsearch but not overly keen to spend their time and money on herding servers! We provide our customers with complete cluster control: they can scale their clusters up or down at any time, according to their immediate needs. It's effortless and there's zero downtime."

More information and price plans are available on the add-on page.

Categories: Companies

Introducing Realtime Logging

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Today we're incredibly excited to announce the public beta of our brand new logging module. Starting immediately all new applications created on AppHarbor will have logging enabled. You can enable it for your existing apps on the new "Logging" page.

We know all too well that running applications on a PaaS like AppHarbor sometimes can feel like a black box. So far we haven't had a unified, simple and efficient way to collate, present and distribute log events from the platform and your apps.

That's exactly what we wanted to address with our logging solution, and based on the amazing feedback from private beta users we feel confident that you'll find it useful for getting insight about your application and AppHarbor. A big thanks to all the beta testers who have helped us refine and test these new features.

The new logging module collates log messages from multiple sources, including almost all AppHarbor infrastructure components and your applications: API changes, load balancer request logs, build and deploy output, stdout/stderr from your background workers and more can now be accessed and sent to external services in real time.

Captain's log: consider yourself lucky we're not that much into skeuomorphism

Interfaces

We're providing two interfaces "out of the box": a convenient web interface can be accessed on the Logging page, and a new log command has been added to the CLI. Get the installer directly from here or install with Chocolatey: cinst appharborcli.install. To start a "tailing" log session with the CLI, you can for instance run appharbor log -t -s appharbor. Type appharbor log -h to see all options.

The web interface works a bit differently, but try it out and let us know what you think. It's heavily inspired by the log.io project, which provides a great client-side interface for viewing, filtering, searching and splitting logs into multiple "screens".

Log web interface

Integration

One of the most useful and interesting aspects of today's release is the flexible integration points it provides. Providing access to your logs in realtime is one thing, but AppHarbor will only store the last 1500 log messages for your application. Storing, searching, viewing and indexing logs can be fairly complex, and luckily many services already exist that help you make more sense of your log data.

We've worked with Logentries to provide a completely automated and convenient way of sending AppHarbor logs to them. When you add the Logentries add-on, your application is automatically configured to send logs to Logentries, and Logentries is configured to display log messages in AppHarbor's format.

Logentries integration

You can also configure any syslog (TCP), HTTP or HTTPS endpoint you like with log "drains". You can use this to integrate with services like Loggly and Splunk, or even your own syslog server or HTTP service. More details about log drains are available in this knowledge base article and the drain API documentation.

Finally, there's a new Log session API endpoint that you can use to create sessions similar to the ones used by the interfaces we provide.

Logplex

If you've ever used Heroku you'll find most of these features very familiar. That's no coincidence - the backend is based on Heroku's awesome distributed syslog router, Logplex. Integrating with Logplex makes it a lot easier for add-on providers who already support Heroku's Logplex to integrate with AppHarbor, while giving us a scalable and proven logging backend to support thousands of deployed apps.

Logplex is also in rapid, active development, and a big shout-out to the awesome people at Heroku who are building this incredibly elegant solution. If you're interested in learning more about Logplex we encourage you to check out the project on Github and try it for yourself. We've built a client library for interacting with Logplex's HTTP API and HTTP log endpoints from .NET apps - let us know if you'd like to use this and we'll be happy to open source the code. The Logplex documentation on stream management is also useful for a high-level overview of how Logplex works.

Next steps

With this release we've greatly improved the logging experience for our customers. We're releasing this public beta since we know it'll be useful to many of you as it is, but we're by no means finished. We want to add even more log sources, provide more information from the various infrastructure components and integrate with more add-on providers. Also note that request logs are currently only available on shared load balancers, but they will be rolled out to all load balancers soon. If you find yourself wanting some log data that is not currently available, please let us know. We now have a solid foundation to provide you with the information you need when you need it, and we couldn't be more excited about that.

We'll provide you with some examples and more documentation for these new features over the next couple of weeks, but for now we hope you'll take it for a spin and test the waters for yourself. Have fun!

Categories: Companies

Introducing PageSpeed optimizations

AppHarbor Blog - Git-enabled .NET PaaS - Wed, 09/24/2014 - 16:24

Today we're introducing a new experimental feature: Google PageSpeed optimization support. The PageSpeed module is a suite of tools that tries to optimize web page latency and bandwidth usage of your websites by rewriting your content to implement web performance best practices. Reducing the number of requests to a single domain, optimizing cache policies and compressing content can significantly improve web performance and lead to a better user experience.

With PageSpeed optimization filters we're making it easier to apply some of these best practices, and providing a solution that efficiently and effortlessly speeds up your web apps. The optimizations take place at the load balancer level and work for all web applications, no matter what framework or language you use.

As an example of how this works you can inspect the HTML and resources of this blog to see some of the optimizations that are applied. Analyzing blog.appharbor.com with the online PageSpeed insights tool yields a "PageSpeed score" of 88 when enabled versus 73 when disabled. Not too bad considering it only took a click to enable it.

PageSpeed button

You can enable PageSpeed optimizations for your web application on the new "Labs" page, which can be found in the application navigation bar. The application will be configured with PageSpeed's core set of filters within a few seconds, and these filters will then be applied to your content.

When you've enabled PageSpeed we recommend that you test the application to make sure it doesn't break anything. You can also inspect the returned content in your browser and if you hit any snags simply disable PageSpeed and let support know about it. Note that only content transferred over HTTP from your domain will be processed by PageSpeed filters. To optimize HTTPS traffic you can enable SPDY support (although that is currently only enabled on dedicated load balancers and in the beta region).

We'll make more filters available later on, but for the beta we're starting out with a curated set of core filters, which are considered safe for most web applications. There are a few other cool filters we'll add support for later on - such as automatic sprite image generation and lazy-loading of images. Let us know if there are any filters in the catalog you think we should support!

Categories: Companies
