
Canonical
on 31 August 2010

Building Apps for the Cloud: How KnowledgeTree Used Ubuntu for Rapid Development of Its SaaS Offering


Would you like to find out how Ubuntu is being deployed in the cloud space? Would you like to see how KnowledgeTree uses Ubuntu for its SaaS offering? If so, please join KnowledgeTree and Canonical on Wednesday 8 September 2010 at 11 am Pacific (2 pm Eastern) for a joint webinar.

Enjoy an informative and thought-provoking talk from Evan Person, Director of Product for KnowledgeTree, and Renen Watermeyer, Director of Engineering for KnowledgeTree, in which they will discuss:

  • The criteria KnowledgeTree considered when choosing an OS for the cloud
  • How Ubuntu met those criteria and was subsequently selected
  • How using Ubuntu contributed to the way the service was built
  • Lessons learned in the process of developing on Ubuntu for the cloud

Register to attend this informative event.
