
    Amazon AWS has recently launched ECS Fargate to “run containers without having to manage servers or clusters”.

    This got me interested enough to patch the Jenkins ECS plugin to run Jenkins agents as containers using the Fargate model, instead of the previous model where you still needed to create and manage VM instances to run the containers.

    How does it work?

    With the Jenkins ECS plugin you can configure a “Cloud” item that will launch all your agents on ECS Fargate, matching jobs to different container templates using labels. This means you can have unlimited agents with no machines to manage and just pay for what you use.
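    For example, once a container template in the Cloud configuration has been given a label, a Pipeline job can request it like this (a sketch; the "fargate" label name is hypothetical):

    pipeline {
        // run this build on an agent container launched in ECS Fargate
        agent { label 'fargate' }
        stages {
            stage('build') {
                steps {
                    sh 'mvn -B verify'
                }
            }
        }
    }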

    Some tips on the configuration:

    • To launch in Fargate, some options must be configured: the subnet, the security group, and whether to assign a public IP to the container.
    • Agents need to adhere to some predefined CPU and memory configurations. For instance, for 1 vCPU you can only use 2 GB to 8 GB of memory, in 1 GB increments.


    Price per vCPU is $0.00001406 per second ($0.0506 per hour) and per GB memory is $0.00000353 per second ($0.0127 per hour).

    If you compare the price with an m5.xlarge instance (4 vCPU, 16 GB) that costs $0.192 per hour, the same capacity would cost you $0.4056 in Fargate, more than twice as much, ouch! You could build something similar and cheaper with Kubernetes using the cluster autoscaler, provided you can achieve high utilization of the machines.
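    To make the comparison concrete, pricing the same 4 vCPU / 16 GB of capacity at the hourly Fargate rates above:

    4 x $0.0506 + 16 x $0.0127 = $0.2024 + $0.2032 = $0.4056 per hour

    versus $0.192 per hour for the EC2 instance, i.e. roughly a 2.1x premium for not managing the machines.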

    While I was writing this post, someone beat me to it and submitted a PR to the ECS plugin to add Fargate support.


    When I started working on Apache CXF full time, it was already a well-established project, shipping a production-quality JAX-WS implementation and an early JAX-RS one.

    During the next N years, with some short breaks, all of us put a lot of effort into supporting the CXF community: continuing to enhance JAX-RS and various security features, fixing lots and lots of bugs, and trying to support the idea that "CXF was more than just a library" :-).

    I'm curious how many SOAP or pure HTTP calls have been made over the years with the help of CXF. Sometimes one can read: "This product supports thousands of transactions per minute". It would be fun to read somewhere that "CXF has supported several millions of service calls over 10 years" :-). Or how many downloads have been made? Who knows...

    It is satisfying to see that today users keep coming to CXF to ask questions and open new issues. No doubt it has helped many users, and helped to make JAX-WS and then JAX-RS completely mainstream, alongside its Jersey and RestEasy 'colleague' frameworks.

    No doubt the Apache CXF story will continue, and I've been happy to be part of it. Thank you!


    Too crowded, too many queues, too little space - but also lots of friendly people, Belgian waffles, ice cream, an ASF dinner with grey beards and new people, a busy ASF booth, bumping into friends every few steps, meeting humans you see only online for an entire year or more: For me, that's the gist of this year's FOSDEM.

    Note: A German version of this article, including images, appeared in my employer's tech blog.

    To my knowledge FOSDEM is the biggest gathering of free software people, in Europe at least. It's free of charge, kindly hosted by ULB and organised by a large group of volunteers. Every year in early February the FOSS community meets for one weekend in Brussels to discuss all sorts of aspects of Free and Open Source Software development - including community, legal, business and policy aspects. The event features more than 600 talks as well as several dozen booths by FOSS projects and FOSS-friendly companies. There are also several FOSDEM fringe events surrounding the main event that are not located on campus. If you go to any random bar or restaurant in Brussels that weekend you are bound to bump into FOSDEM people.

    Fortunately for those not lucky enough to have made it to the event, video recordings (unfortunately of varying quality) are available online, including some highlights you might want to watch.

    One highlight for me personally this year: I cannot help but believe that I met more faces from The Apache Software Foundation than at any other FOSDEM before. The booth was crowded at all times - Sharan Foga did a great job explaining The ASF to people. It was also great to hear The ASF mentioned in several talks as one of the initiatives to look at to understand how to run open source projects in a sustainable fashion with an eye on longevity. It was helpful to have at least two current Apache board members (Bertrand Delacretaz as well as Rich Bowen) on site to help answer tricky questions. Last but not least, it was lovely meeting several of the Apache Grey Beards (TM) for an Apache Dinner on Saturday evening - luckily co-located with the FOSDEM HPC speaker dinner, which took a conflict out of the Apache HPC people's calendars :)

    Personally, I hope to see many more ASF people later this year in Berlin for FOSS Backstage - the advertisement sign that stood at the FOSDEM ASF booth last weekend has already made it here; will you follow?


    It seems like only yesterday that I joined Talend, seven years ago. Time has flown so fast... Next week I will be returning to Red Hat, but first I will talk a bit about my years with Talend.

    I'd like to believe that working for Talend has helped me become a better engineer and grow in confidence. And what about those unforgettable Talend R&D events :-)? No doubt, it has been an interesting and exciting journey.

    It has not been easy to find a link to a piece of music that would associate well with the company, but I think I've got it in the end. The lyrics are a bit sombre, but the music reflects well what I'd like to remember about Talend, the energy and the style: enjoy Ave Cesaria by Stromae. Thank you Talend, goodbye.

    And now I'll be heading back to Red Hat :-). I will be joining the WildFly Swarm team; I'm looking forward to it and optimistic about the new challenge. I'll have to learn new things, and I will enjoy that too. In time, after I've settled in well, I will return to this blog and talk about WildFly Swarm and other related projects.

    Stay Tuned !


    This is the second in a series of blog posts on the Apache Sentry security service. The first post looked at how to get started with the Apache Sentry security service, both from scratch and via a docker image. The next logical question is how we can define the authorization privileges held in the Sentry security service. In this post we will briefly cover what those privileges look like and how we can query them using two different tools that ship with the Apache Sentry distribution.

    1) Apache Sentry privileges

    The Apache Sentry docker image we covered in the previous tutorial ships with a 'sentry.ini' configuration file (see here) that is used to retrieve the groups associated with a given user. A user must be a member of the "admin" group to invoke operations on the Apache Sentry security service, as configured in 'sentry-site.xml' (see here). To avoid confusion: 'sentry.ini' also contains "[groups]" and "[roles]" sections, but these are not used by the Sentry security service.
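    For illustration, a minimal 'sentry.ini' group mapping might look like the following sketch (the user name is hypothetical; check the file in the docker image for the real contents):

    [users]
    # members of the "admin" group may invoke operations on the security service
    alice = admin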

    In Apache Sentry, a user is associated with one or more groups, which in turn are associated with one or more roles, which in turn are associated with one or more privileges. Privileges are made up of a number of different components that vary slightly depending on what service the privilege is associated with (e.g. Hive, Kafka, etc.). For example:

    • Host=*->Topic=test->action=ALL - This Kafka privilege grants all actions on the "test" topic on all hosts.
    • Collection=logs->action=* - This Solr privilege grants all actions on the "logs" collection.
    • Server=sqoopServer1->Connector=c1->action=* - This Sqoop privilege grants all actions on the "c1" connector on the "sqoopServer1" server.
    • Server=server1->Db=default->Table=words->Column=count->action=select - This Hive privilege grants the "select" action on the "count" column of the "words" table in the "default" database on the "server1" server.
    For more information on the Apache Sentry privilege model please consult the official wiki.

    2) Querying the Apache Sentry security service using 'sentryShell'

    Follow the steps outlined in the previous tutorial to get the Apache Sentry security service up and running, either using the docker image or by setting it up manually. The Apache Sentry distribution ships with a "sentryShell" command line tool that we can use to query the Apache Sentry security service. Depending on which approach you followed to install Sentry, either go to the distribution or log into the docker container.

    We can query the roles, groups and privileges via:
    • bin/sentryShell -conf sentry-site.xml -lr
    • bin/sentryShell -conf sentry-site.xml -lg
    • bin/sentryShell -conf sentry-site.xml -lp -r admin_role
    We can create an "admin_role" role and add it to the "admin" group via:
    • bin/sentryShell -conf sentry-site.xml -cr -r admin_role
    • bin/sentryShell -conf sentry-site.xml -arg -g admin -r admin_role
    We can grant a (Hive) privilege to the "admin_role" role as follows:
    • bin/sentryShell -conf sentry-site.xml -gpr -r admin_role -p "Server=*->action=ALL"
    If we are adding a privilege for anything other than Apache Hive, we need to explicitly specify the "type", e.g.:
    • bin/sentryShell -conf sentry-site.xml -gpr -r admin_role -p "Host=*->Cluster=kafka-cluster->action=ALL" -t kafka
    • bin/sentryShell -conf sentry-site.xml -lp -r admin_role -t kafka
    3) Querying the Apache Sentry security service using 'sentryCli'

    A rather more user-friendly alternative to the 'sentryShell' is available in Apache Sentry 2.0.0. The 'sentryCli' can be started with 'bin/sentryCli'. Typing ?l lists the available commands:
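    Judging by how the commands are used later in this series, the list includes at least the following:

    • t <component> - set the component type (e.g. "kafka" or "hive")
    • cr <role> - create a role
    • gr <role> <group> - grant a role to a group
    • gp <role> <privilege> - grant a privilege to a role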

    The Apache Sentry security service can be queried using any of these commands.


    ApacheCon 2014

    I've been working at Apigee since September 2013 and one of the things I love most about my new job is the fact that I'm actively contributing to open source again.

    I'm working on Apache Usergrid (incubating), an open source Backend-As-A-Service (BaaS) that's built on the Apache Cassandra database system. Apigee uses Usergrid as part of Apigee Edge (see the Build Apps part of the docs).

    Apigee contributed code for Usergrid to the Apache Software Foundation back in October 2013 and Usergrid is now part of the Apache Incubator. The project is working towards graduating from the Incubator. That means learning the Apache way, following the processes to get a release out and most importantly, building a diverse community of contributors to build and maintain Usergrid.

    One of the most important parts of building an open source community is making it easy for people to contribute, and that's why I submitted a talk to the ApacheCon US 2014 conference (April 7-9 in Denver, CO) titled How to Contribute to Usergrid.

    The talk is intended to be a briefing for contributors, one that will lead you through building and running Usergrid locally, understanding the code-base and test infrastructure, and getting your code accepted into the Usergrid project.

    Here's the outline I have so far:

    How to Contribute to Apache Usergrid

    • Motivation
      • Why would anybody want to contribute to Usergrid?
    • First steps
      • The basics
      • Getting signed up
    • Contributing to the Stack
      • Understanding the architecture & code base
      • Building the code. Making and testing changes
      • Running Usergrid locally via launcher & via Tomcat
    • Contributing to the Portal
      • Understanding the architecture & code base
      • Building the code. Making and testing changes
      • Running the portal locally via node.js
    • Contributing to the SDKs
      • Understanding the architecture & code base
      • Building the code. Making and testing changes
    • Contributor workflow: how to get your code into Usergrid
      • For quickie drive-by code contributions
      • For more substantial code contributions
      • For documentation & website changes
    • Contributing Docs and Website changes
      • Website, wiki and GitHub pages
      • How to build the website and docs
    • Roadmap
      • First release
      • New Core Persistence system
      • The two-dot-o branch
      • Other ideas

    I'm in the process of writing this talk now so suggestions and other feedback are most welcome.


    The next of my 2014 side projects that I'd like to share is Usergrid-Ember, an experiment and an attempt to learn more about Ember.js and Apache Usergrid by implementing the Checkin example from my Usergrid mobile development talk. If you're interested in either Usergrid or JavaScript web development, then I hope you'll read on...

    Why Ember.js?

    Ember logo

    Ember.js is one of the leading frameworks for building browser-based apps. It's one of many JavaScript Model View Controller (MVC) frameworks. Generally speaking, these frameworks let you define a set of routes or paths in your app, for example /index, /orders, /about, etc. and map each to some JavaScript code and HTML templates. Handling a route usually means using Ajax to grab some “model” data from a server and using a template to create an HTML “view” of the data that calls functions provided in a "controller" object.

    JavaScript MVC frameworks are not simple and each has its own learning curve. Is it really worth the learning time when you can do so much with a little library like jQuery? For most projects I think the answer is yes. These frameworks force you to organize your code in a logical and consistent way, which is really important as projects grow larger, and they provide features that may save you a lot of development time.

    Based on what I've seen on the net and at local meet-ups, the leading frameworks these days are Ember.js and AngularJS. After I saw Yehuda Katz's talk at All Things Open, I decided to spend some time learning Ember.js.

    Getting started with Ember.js

    The first thing you see when you visit the Ember.js site is a big button that says "DOWNLOAD THE STARTER KIT", and so that is where I started. The Starter Kit is a minimal Ember.js project with about twenty JavaScript, HTML and CSS files. It's a good way to start: small and simple.

    Ember.js Starter Kit files:

    screenshot of Starter Kit directory layout

    Sidebar: I do hope they keep the Starter Kit around as the new Ember-CLI tool matures. Ember-CLI generates too many magic boiler-plate files and sub-directories for somebody who is trying to understand the basics of the framework. An interesting point of view on this: Ember-CLI is Making You Stupid by Yoni Yechezkel.

    Other stuff: Bower, Grunt and Bootstrap

    I like to bite off more than I can chew, so I decided to use a couple of other tools. I used Bower to manage dependencies and Grunt to concatenate and minify those dependencies, plus other things like launching a simple web server for development purposes. I also decided to use Bootstrap to provide the various UI components needed, like a navbar and nicely styled list views.

    I won't cover the details, but it was relatively easy to get Bower and Grunt working. Here are the config files in case you are interested: bower.json and Gruntfile.js. I did hit one problem: when I included Bootstrap as one of my dependencies the Glyphicons would all appear as tiny boxes, so I decided to pull Bootstrap from a CDN instead (looks like there is a fix for that now).

    Defining Index Route, Model and Template

    Every Ember.js app needs to define some routes. There is a default route for the "/" path which is called the index route, and you can add your own routes using the Router object. The snippet below shows what I needed to get started:

    Part of app.js (link)
    // create the ember app object
    App = Ember.Application.create();
    // define routes
    App.Router.map( function() {
        this.route("login", { path: "/login" });
        this.route("logout", { path: "/logout" });
        this.route("register", { path: "/register" });
    });

    Ember.js will look for the JavaScript Route and Controller objects, as well as the HTML template, using the names above. For example, Ember.js will expect the login route to be named App.LoginRoute, the controller to be named App.LoginController and the template to be named "login".

    Let's talk about the index route. When a user arrives at your app they’ll be directed to the index route. Ember.js will then look for a JavaScript object called App.IndexRoute to provide the model data and JavaScript functions needed for the index page. Here’s a partial view of the index route:

    Part of app.js (link)
    App.IndexRoute = Ember.Route.extend({
        // provide model data needed for index template
        model: function() {
            if ( this.loggedIn() ) {
                // fetch the latest Activities via the Usergrid REST API
                return this.store.find("activity");
            }
            return [];
        }
    });

    The index page of the Checkin app shows the Checkin activities of the people that you follow. Above you can see how the route's model() function makes that data available to the template for display. If the user is logged in, we call the store.find("activity") function, which calls the Usergrid REST API to get an array of the latest Activity objects. There is some serious Ember-Data magic going on there and I'll cover that in part two of this article.

    To display the index route, Ember looks for an HTML template called “index” and will use that template to display the index page. Below is the index template. The template is a Handlebars template and the things that appear in double curly-braces are Handlebars expressions.

    Part of index.html (link)
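    (A rough sketch of what the template may have looked like, based on the description below; the button labels and action names are assumptions:)

    <script type="text/x-handlebars" data-template-name="index">
      <button class="btn" {{action "addCheckin"}}>Add Checkin</button>
      <button class="btn" {{action "logout"}}>Logout</button>
      <ul>
      {{#each item in model}}
        <li>{{item.content}} - {{item.location}}</li>
      {{/each}}
      </ul>
    </script>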

    In the above template you can see a couple of {{action}} expressions that call out to JavaScript methods defined in the Checkin app. The part of the code that uses the model is the {{#each}} loop, which loops through each Activity in the model and displays an HTML list with the item.content and item.location of each Activity.

    Here's what the above template looks like when displayed in a browser:

    screenshot of checkin app index page

    Implementing Login

    In Checkin, login is implemented using HTML Local Storage. Once a user has successfully logged in, the app stores the username and the user's access_token in Local Storage. When a user arrives at the index page, we check Local Storage to see if that user is logged in and, if not, we direct them to the login route, which in turn displays the login page using the template below.

    Part of index.html (link)
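    (A minimal sketch based on the description below; the field and action names are assumptions:)

    <script type="text/x-handlebars" data-template-name="login">
      {{input value=username placeholder="Username"}}
      {{input value=password type="password" placeholder="Password"}}
      <button class="btn" {{action "login"}}>Login</button>
      <button class="btn" {{action "register"}}>Register</button>
    </script>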

    The LoginController provides the two functions needed by the Login page itself. There is a login() function that performs the login, and there is a register() function that directs the user to the New User Registration page. Here's a snippet of code from the App.LoginController that provides these two functions:

    Part of app.js (link)
    App.LoginController = Ember.Controller.extend({
      actions: {
        login: function() {
          // login by POST to Usergrid app's /token end-point
          var loginData = {
            grant_type: "password",
            username: this.get("username"),
            password: this.get("password")
          };
          // send the login request using jQuery Ajax
          $.ajax({
            type: "POST",
            url: Usergrid.getAppUrl() + "/token",
            data: loginData,
            context: this,
            error: function( data ) {
              // login failed, show error message
              alert( data.responseJSON.error_description );
            },
            success: function( data ) {
              // store access_token in local storage
              Usergrid.user = data.user;
              localStorage.setItem("username", loginData.username );
              localStorage.setItem("access_token", data.access_token );
              // clear the form
              this.set("username", "");
              this.set("password", "");
              // call route to handle post-login transition
              // (the original call was lost; transitioning to index is one option)
              this.transitionToRoute("index");
            }
          });
        },
        register: function() {
          // direct the user to the New User Registration page
          this.transitionToRoute("register");
        }
      }
    });

    The above code shows how to log in to a Usergrid app using jQuery's Ajax feature. The login() function takes the username and password values from the login form, puts those in a JSON object with grant_type "password" and posts that object to the /token end-point of the Usergrid app. If that post succeeds, the response will include an access_token. We store that in Local Storage; we'll need to use it in all subsequent calls to Usergrid.

    Usergrid fans will notice that I'm not using the Usergrid JavaScript SDK. That's because Ember.js provides Ember-Data, which acts as a very nice REST client and can be adapted to work with the URL structure and JSON formats of just about any REST API. I'll write about that in part two of this article.

    0 0

    In part one, I explained the basics of the example Usergrid-Ember "Checkin" app, how the index page is displayed and how login is implemented. In part two, I'll explain how Ember.js can be hooked into the Usergrid REST API to store and query JSON objects.

    Ember logo

    Ember.js includes a feature referred to as Ember-Data, which provides a persistence interface for storing and retrieving JavaScript objects that could be stored in memory, or stored on a server and accessed via REST API.

    To use Ember-Data with your REST API you've got to define an Ember-Data model and add an Ember-Data REST adapter. If your REST API differs from what Ember-Data expects, then you will probably have to extend the built-in REST adapter to handle your URL patterns, and extend the built-in REST serializer to handle your JSON format. By extending Ember-Data in this way, you can use it to store and query data from Usergrid without using the Usergrid JavaScript SDK at all. Below I'll explain what I had to do to make the Checkin app's Activities collection available via Ember-Data.

    Define Ember-Data models

    Ember-Data expects each of your REST API collections to have a defined data model, one that extends the DS.Model class. Here's what I added for the Activities collection:

    From app.js (link)

    App.Activity = DS.Model.extend({
      uuid: DS.attr('string'),
      type: DS.attr('string'),
      content: DS.attr('string'),
      location: DS.attr('string'),
      created: DS.attr('date'),
      modified: DS.attr('date'),
      actor: DS.attr('string'),
      verb: DS.attr('string'),
      published: DS.attr('date'),
      metadata: DS.attr('string')
    });

    Create a custom RESTAdapter

    The Ember-Data REST adapter expects a REST API to follow some common conventions for URL patterns and for JSON data formats. For example, if your REST API provides a collection of cats then Ember-Data will expect your REST API to work like so:

    What Ember-Data expects for a cats collection:

    • GET /cats - get collection of cats
    • POST /cats - create new cat.
    • GET /cats/{cat-id} - get cat specified by ID.
    • PUT /cats/{cat-id} - update cat specified by ID.
    • DELETE /cats/{cat-id} - delete cat specified by ID.

    Usergrid follows the above conventions for collections, but there are some exceptions, for example the Usergrid Activities collection. A GET on the /activities path will return the Activities of the users that you (i.e. the currently authenticated user) follow. You don't POST new activities there; instead you post to your own Activities collection at the path /users/{your-user-id}/activities. It works like this:

    Usergrid's Activities collection:

    • GET /activities - get Activities of all users that you follow.
    • POST /users/{user-id}/activities - create a new Activity for the user specified by ID.
    • GET /users/{user-id}/activities - get Activities for one specific user.

    To adapt the Activities collection to Ember-Data, I decided to create a new model called NewActivity. A NewActivity represents the data needed to create a new Activity; here's the model:

    From app.js (Link)

    // Must have a special model for new activity because new 
    // Activities must be posted to the path /{org}/{app}/users/activities, 
    // instead of the path /{org}/{app}/activities as Ember-Data expects.
    App.NewActivity = DS.Model.extend({
      content: DS.attr('string'),
      location: DS.attr('string'),
      actor: DS.attr('string'),
      verb: DS.attr('string')
    });

    Then, in Checkin's custom REST adapter, I added logic to the pathForType() function to ensure that NewActivities are posted to the correct path. Here's the adapter:

    From app.js (Link)

    App.ApplicationAdapter = DS.RESTAdapter.extend({
      host: Usergrid.getAppUrl(),
      headers: function() {
        if ( localStorage.getItem("access_token") ) {
          return { "Authorization": "Bearer "
              + localStorage.getItem("access_token") };
        }
        return {};
      }.property().volatile(), // ensure value not cached
      pathForType: function(type) {
        var ret = Ember.String.camelize(type);
        ret = Ember.String.pluralize(ret);
        if ( ret == "newActivities" ) {
          // Must have special logic here for new activity
          // because new Activities must be posted to the
          // path /{org}/{app}/users/activities, instead of the
          // path /{org}/{app}/activities as Ember-Data expects.
          ret = "/users/" + Usergrid.user.username + "/activities";
        }
        return ret;
      }
    });

    You can see a couple of other interesting things in the example above. First, there's the host field which specifies the base-URL of the REST API for the Checkin app. Next, there's the headers() function, which ensures that every request carries the access_token that was acquired during login.

    Create a custom RESTSerializer

    Ember-Data also has expectations about the JSON format returned by a REST API. Unfortunately, what Ember-Data expects and what Usergrid provides are quite different. The two examples below illustrate the differences:

    Ember-Data vs. Usergrid JSON formats

    Ember-Data expects collections like this:

       cats: [{
           "id": "6b2360d0",
           "name": "enzo",
           "color": "orange"
       }, {
           "id": "a01dfaa0",
           "name": "bertha",
           "color": "tabby"
       }]

    Usergrid returns collections like this:

       action: "get",
       path: "/cats",
       count: 2,
       entities: [{
           "uuid": "6b2360d0",
           "type": "cat",
           "name": "enzo",
           "color": "orange"
           "uuid": "a01dfaa1",
           "type": "cat",
           "name": "bertha",
           "color": "tabby"

    Ember-Data expects individual objects like this:

       cat: {
           "id": "a01dfaa0",
           "name": "bertha",
           "color": "tabby"
       }

    Usergrid returns individual objects like this:

       "id": "a01dfaa0",
       "type": "cat",
       "name": "bertha",
       "color": "tabby"

    You can see two differences above. Ember-Data expects JSON objects to be returned with a "type key" which you can see above: the "cats" field in the collection and the "cat" field in the individual object. Also, Ember-Data expects an object's ID field to be named "id" but Usergrid returns it as "uuid."

    To deal with these differences, the Checkin app extends Ember-Data's DS.RESTSerializer. Here's the code:

    From app.js (Link)

    App.ApplicationSerializer = DS.RESTSerializer.extend({
      // Extract Ember-Data array from Usergrid response
      extractArray: function(store, type, payload) {
        // Difference: Usergrid does not return wrapper object with
        // type-key. So here we grab the Usergrid Entities and stick
        // them under a type-key
        var typeKey = payload.path.substring(1);
        payload[ typeKey ] = payload.entities;
        // Difference: Usergrid returns ID in 'uuid' field, Ember-Data
        // expects 'id'. So here we add an 'id' field for each Entity,
        // with its 'uuid' value.
        for ( var i in payload.entities ) {
          if ( payload.entities[i] && payload.entities[i].uuid ) {
            payload.entities[i].id = payload.entities[i].uuid;
          }
        }
        return this._super(store, type, payload);
      },
      // Serialize Ember-Data object to Usergrid compatible JSON format
      serializeIntoHash: function( hash, type, record, options ) {
        // Usergrid does not expect a type-key
        record.eachAttribute(function( name, meta ) {
          hash[name] = record.get(name);
        });
        return hash;
      }
    });

    In the code above you can see how the extractArray() method moves the "entities" collection returned by Usergrid into a type-key field as expected by Ember-Data and how it copies the "uuid" field to add the "id" field that Ember-Data expects.

    We also need to transform the data that Ember-Data sends to Usergrid. You can see this above in the serializeIntoHash() function, which ensures that when data is POSTed or PUT to Usergrid, the type key is removed, because that's what Usergrid expects.

    Implementing Add-Checkin

    To implement Add-Checkin, I added an HTML template called "add-checkin" to Checkin's index.html file. The template displays an Add-Checkin form with two fields: one for content and one for the location. Here's what it looks like in all its modal glory:

    screenshot of add-checkin page

    Both fields are simple strings (someday I'd like to extend Checkin to use location information from the browser). I won't go into detail here, but it took a bit of research to figure out how to make a Bootstrap modal dialog work with Ember.js. Below you can see the add-checkin controller, which provides a save() function to save a new checkin.

    From app.js (Link)

    App.AddCheckinModalController = Ember.ObjectController.extend({
      actions: {
        save: function( inputs ) {
          var content = inputs.content;
          var location = inputs.location;
          var target = this.get("target");
          // create a NewActivity record via Ember-Data
          var activity = this.store.createRecord( "NewActivity", {
            content: content,
            location: location,
            verb: "checkin",
            actor: {
              username: Usergrid.user.username
            }
          });
          // save it, i.e. POST it to Usergrid
          activity.save().then(
            function( success ) {
              // success handler (the original code was lost)
            },
            function( error ) {
              alert("Error " + error.responseJSON.error_description);
            });
        }
      }
    });

    In the code above you can see how easy it is to access Usergrid data via Ember-Data now that we've got our custom REST adapter and serializer in place. We create a new Activity with a call to store.createRecord(), and to save it all we need to do is call the record's save() function.

    Time to wrap up...

    To sum things up, here are some closing thoughts and observations.

    • If you are considering JavaScript MVC frameworks, then Ember.js is definitely worthy of your consideration. The documentation makes it easy to learn and the community is friendly and helpful.
    • It would be great for Usergrid to provide an Ember.js SDK that makes it really easy to build apps with Ember.js and Usergrid.
    • Ember-Data is an integral part of Ember.js, something that you need for pretty much anything, but it is treated as a separate package with separate documentation. That is somewhat confusing for a new user.
    • Ember-Data does not include built-in form validation so if your app includes a large number of non-trivial forms, then you may prefer AngularJS over Ember.js.
    • There is a form validation plugin for Ember.js, but it requires the experimental Ember-CLI utility. I tried to use it, but Ember-CLI was unpleasant enough that I gave up.

    I appreciate any feedback you might have about this article, the Usergrid-Ember project and Apache Usergrid. If you want to see how the whole Usergrid-Ember project fits together, find it on GitHub here: Usergrid-Ember. Next up, I'll write about my experiences using Apache Shiro to replace Spring Security in Apache Roller.


    For various reasons, I've always got a couple of coding projects on the back burner, things that I hack around with on weekends and breaks. In 2014, I started four projects and learned about Ember.js, jQuery Mobile, Apache Shiro, Apache CXF and the Arquillian test framework.

    I like to share my code, so I've put it all on GitHub and I'm going to write a brief post about each project here on my blog. I'll provide links as I go and, of course, I welcome any criticisms and suggestions for improvement that you might have. First up: the Usergrid-Mobile project.

    The Usergrid-Mobile project

    ApacheCon EU logo
    To be honest, Budapest was the goal of this project. In the spring of 2014, I decided that my best chance of getting to ApacheCon EU in Budapest was to create a great "mobile development with Usergrid" talk, and to do that I needed a great example project. The resulting project shows how to create a dumbed-down Foursquare-style "checkin" app using HTML5, JavaScript, jQuery Mobile and Apache Cordova.

    Luckily for me, my talk was accepted for ApacheCon EU and in November I traveled to Budapest (took some photos) and gave the talk there.

    I also presented the talk at the All Things Open conference in Raleigh, NC and you can view a video of that talk, Mobile Development with Usergrid on YouTube.

    You can find the code for usergrid-mobile on GitHub. I also created a Vagrantfile to launch a local instance of Usergrid for demo purposes. It's called usergrid-vagrant.

    That's all for now. Next up: Usergrid-Ember.


    Create, build and activate a custom module for an Apache2 HTTP Server.
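    A minimal sketch of that workflow using Apache's apxs tool (the module name "example" is hypothetical):

    $ # generate a skeleton module: creates mod_example.c plus build files
    $ apxs -g -n example
    $ cd example
    $ # compile, install, and activate the module (adds the LoadModule line)
    $ apxs -c -i -a mod_example.c
    $ # restart Apache so the module is loaded
    $ service apache2 restart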


    Time to go public here.  This is one of many matters I’ve been meaning to blog about but wasn’t getting around to.  But this deserves to be on record somewhere public, and I don’t want to rely on Virgin’s forum where I have been posting it.

    My broadband service from Virgin has been misbehaving again. I'm not sure when it started: sometime last year I found myself frequently getting very poor VOIP call quality, which in retrospect was probably a symptom. Other visible symptoms of the boiling frog included timeouts on the web and from my mailer.

    It’s slightly reminiscent of my previous troubles with Virgin, a nightmare that bears re-reading. In some ways not as bad: I haven’t had extended complete cut-offs. But in other ways worse: it was bad enough running the gamut of menus and adverts trying to phone them before, but this time that’s been replaced with an “on hold” noise that’s some yob screaming extremely aggressively: the kind of thing you’d beat a hasty retreat from if you heard it coming from a nearby street. I didn’t catch any words, but the sound was a most emphatic “F*** OFF”.

    Anyway, visiting the website, I find there’s no way to file a support ticket, only supposedly-interactive ways to call them, and a community forum.  The interactive ways don’t work, as will become clear below.

    The Forum – once I’ve signed up (groan) – does work, and gets me some helpful replies.  But these aren’t from Virgin, they’re just members of the public.  My thread “Contacting Virgin” there tells the story.  This morning, one post was removed from there.  Not an important post, but if they can remove that then I reckon it’s time to copy the important contents, and not just to the saved page I already have.  So here goes.  My posts verbatim; replies omitted in case any other poster might be bothered by copyright on their words.

    Jan. 30th: 15:53

    I have a problem with virgin broadband: it’s very slow (less than 1% of the theoretical speed[1]) and so intermittent that many things are simply timing out, and phone (VOIP) has become unusable.

    So I tried to contact Virgin. First online, where it tells me their support team are unavailable (yes, this is within the opening hours advertised – most recently today about 15:20). Then by (mobile) ‘phone, where after 4 minutes of menus it puts me indefinitely on hold. Then today I went in person into a Virgin shop, where the staff could (or would) do absolutely nothing, and wouldn’t even let me try to ‘phone customer support from there.

    How the **** do I contact them?

    I have just now taken the precaution of cancelling my direct debit. Maybe that’ll prompt them to contact me?

    [1] e.g. ,

    [first reply tells me I have contacted them by posting, but it’ll take “about a week”, and advises me to post some info from my router]

    Jan 30th: 17:06 (as I was about to head out):

    Thanks Tony. Yes, I’m at my desk, working wired (I use wireless too, but not for things like speedtest). Both are equally affected.

    Sadly this editor won’t accept cut&paste from my router’s status pages. Well, actually it looks fine when I paste it in and in preview, but then rejects it when I try to post. I may try again later, but not now.

    I could add my earlier experience of Virgin failing here, especially

    Feb 8th: 08:32

    Well, my broadband appears to be back. In fact, it’s faster than it’s ever been before, or than I ever asked for. I seem to recollect that when the man from Virgin came to install my kit for a 30 Mb/s connection, he mentioned explicitly throttling something back for that.

    That (still) doesn’t resolve the issue of contacting Virgin. If it’s pure coincidence that they fixed it after my attempts to contact them, that leaves me in limbo again next time something fails. Alternatively, if something I did (like my session with their menus from the mobile phone, or my posting here) prompted them to fix it silently, that’s an extremely unsatisfactory way to treat clients.

    Either way, there needs to be a way to contact Virgin and get either a fix or at least an acknowledgement that a fault has been logged and will be checked out, rather than leave a customer in limbo! Not to mention an acknowledgement of known faults on Virgin’s status pages (this fault may have been unknown to Virgin until my attempts to contact them, but the one that led to my blog post referenced above was certainly known to them).

    Tony, do you act for Virgin here, or am I still completely un-acknowledged by the company?

    [another helpful reply telling me – among other things – this forum is the best way to contact virgin and suggesting 7-10 days for a reply from staff]

    Feb. 16th, 22:44 (after nasty email from their billing)

    No contact here after two and a half weeks. Perhaps I have to go to ofcom?

    (Ofcom website tells me there’s an ombudsman, but I have to wait 8 weeks before trying them).

    Feb 16th, 23:05 (after an attempt to reply to billing unsurprisingly bounced).

    Seems I can’t reply to their email, either. So for the record, here’s what I just tried to send. There’s a “contact us” link in their email, but that just brings me straight back here!

    On Fri, 16 Feb 2018 14:50:08 +0000
    “Virgin Media” <> wrote:

    > Important information about your Virgin Media Account
    > Account Number: ********
    > Overdue Balance: £33.23

    I have no idea if this address reaches a real human, but
    I shall reply in the hope that it does.

    I need to be able to contact Virgin Media concerning my
    service. I have tried in various ways, without success.
    Please see my thread at

    At the time of the original problem, or probably even of
    that post, I’d have accepted being able to get through to
    a call centre droid. I think now it’s gone beyond that,
    and I’d be looking to speak to a real person, and to
    get at least an apology for the lack of service.

    Another helpful reply commenting on the difficulty contacting them, and concluding with a paragraph that really, really deserves reproducing here:

    A cynic might conclude they do not want to make it easy and do not want you to have any record of their statements, but surely that is just being paranoid?

    Note, the three replies mentioned above are all from different posters.  What they have in common is forum labels describing them respectively as “Superuser”, “Super Solver” and “Knows their stuff”.  I presume those labels are based on their track records in Virgin’s fora.

    Getting up to date, here’s Feb. 19th, 10:31:


    They’ve just ticked another box in a diabolical blame game.

    That is to say, half an hour ago, I got a call to my mobile ‘phone, showing the caller as Virgin Media. When I answered, it wasn’t a human, but a robotic voice asking questions to answer on the keypad.

    Question 1: am I me? Press 1 for yes. OK so far.
    Question 2: enter some password. Erm, WTF? Even if I had a clue what password they’re talking about, how likely is it I’d have it to hand at the moment they call me?

    So now they’ve ticked a box. Call the customer, check. Customer confirms identity, check. But customer hangs up. How many customers could hope to explain that to any kind of adjudicator without appearing now to be firmly in the wrong?

    Well, if anyone’s still reading, thank you.  I hope you’re duly amused.  I shall aim to update here as and when things happen, but no promises.  I do still have a 4G device, which is a faff to use but means at least I’m not completely reliant on Kafka’s castle at Liberty Global.


    The world is a complex system, and therefore impossible to predict. Looking back, quite a lot of my time went into the follow-up work after a hacking incident, and I realised less than 5% of my plans. Because of that, the past year leaves me with real regret. Summed up in one line: "I had a plan too, right up until the hacking incident hit".

    “Everyone has a plan until they get punched in the mouth” — Mike Tyson  

    Meanwhile, the elegant, large-scale knowledge and skills I had accumulated over the years turned out to be useless here. For a small startup, anything that is not a lightweight solution you can apply right away is very hard to make work. A remark from Paul Graham, the legend of Y Combinator, started ringing in my ears.

    "Do Things That Don’t Scale” — Paul Graham

    So I gave it a try: a simple machine-learning algorithm for classifying reviews, built together with some young colleagues. To get straight to the conclusion: it was useless. I came to realise that the focus should be on the economic value a technology brings with it, rather than on the value of the technology itself.

    "The *real* money comes from merchandising. I learned it from this documentary” — Elon Musk

    Most O2O businesses today adopt a business model that connects and extends mobile information into offline purchases. In the end it comes down to the amount of information, the accuracy of search, and promotions. My final project is to build a ranking model that can accelerate information collection and raise its quality, together with a full-text search engine.

    "What I guess I'm trying to say is that search is still the killer app” — Eric Schmidt


    Today I received my author copies of the new Camel in Action 2nd edition book.

    There were 3 boxes, with 10, 10 and 5 copies. They weigh 83 lbs (38 kg) in total, so it's heavy stuff.

    I had grabbed one book on my latest trip to Boston, so I have 26 books in total, and one left from the 1st edition.

    I then stacked all the books, which goes quite far up, and I positioned the last books more carefully so as not to topple the stack.

    To dress up for the photo shoots I put on my red fedora and t-shirt from Red Hat, which has been supportive of this work along the way - thank you, big red; without you paying for my coffee and internet, there wouldn't be a 900-page book on Apache Camel.

    And then we played spelling bee.

    Lastly we built a Camel wall. And Mr Camel was thrilled to be on top of the wall.

    Happy reading.


    The Apache CXF Fediz subproject provides an easy way to secure your web applications via the WS-Federation Passive Requestor Profile. An earlier tutorial I wrote covers how to deploy and secure a "simpleWebapp" project that ships with Fediz in Apache Tomcat. One of the questions that came up recently on that article was how to enable logging for the Fediz plugin itself (as opposed to the IdP/STS). My colleague Jan Bernhardt has covered this topic using Apache Log4j. Here we will show a simple alternative way to enable logging using java.util.logging.

    Please follow the earlier tutorial to set up and secure the "simpleWebapp" in Apache Tomcat. Note that after a successful test, the IdP logs appear in "logs/idp.log" and the STS logs appear in "logs/sts.log". However, no logs exist for the plugin itself. To rectify this, copy the "slf4j-jdk14" jar into "lib/fediz" (for example from here). Then edit 'webapps/fedizhelloworld/WEB-INF/classes/logging.properties' with the following content:
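    A minimal java.util.logging configuration along these lines (a sketch; adjust the paths as needed):

    handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
    .level = FINE
    # INFO level messages go to the console (catalina.out)
    java.util.logging.ConsoleHandler.level = INFO
    # FINE level messages go to logs/rp.log; the FileHandler's default
    # formatter is the XMLFormatter
    java.util.logging.FileHandler.level = FINE
    java.util.logging.FileHandler.pattern = ${catalina.base}/logs/rp.log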

    This configuration logs "INFO" level messages to the console (catalina.out) and logs "FINE" level messages to the log file "logs/rp.log" in XML format.


    Damn, I can’t post a comment here. Both Firefox and Chromium complain of a bogus certificate somewhere at WordPress, and I haven’t the time to dig into that. Let’s see if it works as a new post.

    Feb. 22nd, 11:32

    More this morning. A call from a number apparently associated either with Virgin Media or with a scam impersonating them, but it stopped before I could get to the phone. And a text message threatening to cut me off.

    Investigating the phone number is inconclusive as to whether it’s Virgin or a third-party scam, with some comments offering evidence of the latter. There’s also a thread on the Virgin fora raising precisely that question. It’s nearly two years old, but has no reply from the Virgin team. Presumably another facet of the no-communication policy I’m trying to complain about.

    I also replied to the text message. Unsurprisingly, my reply was flagged undeliverable.

    I’ve also now blogged about this:


    This is the third in a series of blog posts on the Apache Sentry security service. The first post looked at how to get started with the Apache Sentry security service, both from scratch and via a docker image. The second post looked at how to define the authorization privileges held in the Sentry security service. In this post we will look at updating an earlier tutorial I wrote about securing Apache Kafka with Apache Sentry, this time using the security service instead of defining the privileges in a file local to the Kafka distribution.

    1) Configure authorization in the broker

    Firstly, download and configure Apache Kafka using SSL as per this tutorial, except use a version of Kafka supported by Apache Sentry 2.0.0. To enable authorization using Apache Sentry we also need to follow these steps. First edit 'config/server.properties' and add the authorizer configuration:
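    A sketch of the relevant broker settings (the property names come from the Sentry Kafka binding and should be verified against its documentation):

    authorizer.class.name=org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer
    sentry.kafka.site.url=file:./config/sentry-site.xml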

    Next copy the jars from the "lib" directory of the Sentry distribution to the Kafka "libs" directory. Then create a new file in the config directory called "sentry-site.xml" with the following content:
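    A sketch of what this file can look like; the property names and values are assumptions to check against the Sentry docs:

    <configuration>
      <!-- where to find the Sentry security service -->
      <property>
        <name>sentry.service.client.server.rpc-addresses</name>
        <value>localhost</value>
      </property>
      <property>
        <name>sentry.service.client.server.rpc-port</name>
        <value>8038</value>
      </property>
      <!-- groups of authenticated users are read from sentry.ini -->
      <property>
        <name>sentry.kafka.provider.resource</name>
        <value>./config/sentry.ini</value>
      </property>
    </configuration>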

    This is the configuration file for the Sentry plugin for Kafka. It instructs Sentry to retrieve the authorization privileges from the Sentry security service, and to get the groups of authenticated users from the 'sentry.ini' configuration file. Create a new file in the config directory called "sentry.ini" with the following content:
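    A sketch; the user names must match your Kafka client principals, and the groups match those used in step 2 below:

    [users]
    admin = admin
    alice = producer
    bob = consumer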
    Note that in the earlier tutorial this file also contained the authorization privileges, but they are not required in this scenario as we are using the Apache Sentry security service.

    2) Configure the Apache Sentry security service

    Follow the first tutorial to install the Apache Sentry security service. Now we need to create the authorization privileges for our Apache Kafka test scenario as per the second tutorial. Start the 'sentryCli' in the Apache Sentry distribution.

    Create the roles:
    • t kafka
    • cr admin_role
    • cr describe_role
    • cr read_role
    • cr write_role
    • cr describe_consumer_group_role 
    • cr read_consumer_group_role
    Add the privileges to the roles:
    • gp admin_role "Host=*->Cluster=kafka-cluster->action=ALL"
    • gp describe_role "Host=*->Topic=test->action=describe"
    • gp read_role "Host=*->Topic=test->action=read"
    • gp write_role "Host=*->Topic=test->action=write"
    • gp describe_consumer_group_role "Host=*->ConsumerGroup=test-consumer-group->action=describe"
    • gp read_consumer_group_role "Host=*->ConsumerGroup=test-consumer-group->action=read"
    Associate the roles with groups (defined in 'sentry.ini' above):
    • gr admin_role admin
    • gr describe_role producer
    • gr read_role producer
    • gr write_role producer
    • gr read_role consumer
    • gr describe_role consumer
    • gr describe_consumer_group_role consumer
    • gr read_consumer_group_role consumer
    3) Test authorization

    Now start the broker (after starting Zookeeper):
    • bin/kafka-server-start.sh config/server.properties
    Start the producer:
    • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
    Send a few messages to check that the producer is authorized correctly. Now start the consumer:
    • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
    Authorization should succeed, and you should see the messages sent by the producer appear in the consumer console window.


    Curious about anything at the ASF or about Apache projects? Can’t find the best place to ask? Here are a few meta-FAQs about FAQs on Apache, the ASF, and Apache projects and communities.

    How can I get involved at Apache?

    Just get started by emailing your ideas or questions to an Apache project you’re interested in. It’s up to you to start – the best projects to work on are ones you are already interested in. The Community Development project is here to help point you in the right direction.

    How can I become an Apache committer?

    First, find a project you are interested in. The best way to get involved is to use an Apache software product that you have a reason to use (even if just curiosity), and then ask questions or submit code patches on that project’s mailing lists.

    It’s all up to you – Apache projects run on merit, which means people who do more work on that project – as measured by the community – get more of a say.

    How many Apache projects are there?

    There are over 190 Apache software projects, and over 50 Apache Incubator podlings that are working to become official Apache projects.

    How do mailing lists work at Apache? Where should I email to ask questions?

    Everything at Apache happens on archived mailing lists.
    Find the right list to use. Technical questions always go to that project’s dev@ list – every project is independent and separate. Reading the Apache mail archives is a great way to see what other people are asking.

    How is Apache software licensed? Is it free to use?

    Apache software uses the Apache License, version 2.0. Questions? Contact the Legal Affairs Committee. Apache PMCs with specific questions: open a LEGAL Jira. All Apache software products are always free (no charge) to download and use.

    Does Apache hold any trademarks or brands?

    The ASF owns Apache trademarks, which include all Apache project and software product names and logos. Read useful trademark resources.

    Where can I find press releases or analyst briefings?

    Our Media and Analyst relations team runs @TheASF on Twitter and writes an official Foundation Blog.

    Who does what at Apache?

    See the ASF Org Chart of officers, find committers in the Apache people directory, read Planet Apache blogs.

    How is the ASF organized? Is it a corporation?

    The ASF is a 501C3 non-profit public charity. Members elect a Board of Directors that appoints Officers. Read about our governance and org chart.

    How do I ask Infrastructure for help?

    The crack Apache Infrastructure team runs everything, and protects our servers from rogue gnomes; you can contact Infra here. Remember: all questions about Apache software products should go to that project’s mailing list.

    How do Apache projects work? What’s this Apache Way I’ve heard about?

    Learn about The Apache Way, our community-led consensus behaviors that make Apache projects so efficient and long-lived, or view presentations about the ApacheWay.

    Are donations to the ASF needed? Can I deduct them from my taxes?

    Our non-profit relies on individual Donors and annual Sponsors for our funding and budgets. Donate today! (Often tax-deductible in the US!)

    How do I get source code?

    All code at Apache is freely downloadable from our Subversion or Git repositories. Learn how to Setup SVN or Git access.

    Where else can I ask any questions about the ASF?

    Apache Community Development (ComDev) volunteers are here to answer any other questions you have about how Apache communities work. You can read all the past questions on the ComDev mailing list.

    My question wasn’t answered here!

    Add your comments below if there are other questions that need answers – or ask the ComDev project for help!



    I sudo-su'd into my root account and enabled the SSL module:

    $ a2enmod ssl
    $ service apache2 restart
    I installed the letsencrypt client:
    $ git clone
    $ cd letsencrypt
    I shut down my Apache – there is probably a better way, but on a low-traffic test server/domain, that’s entirely okay for me. I then requested a new certificate and started Apache up again.
    $ service apache2 stop
    $ ./letsencrypt-auto certonly --standalone --email -d
    $ service apache2 start
    Now I have a lovely certificate! Yay! Next I went into the config file of my virtual host and changed things around a bit. I am sure that it is frowned upon to put two VirtualHost directives with two different ports into one config file, but… ¯\_(ツ)_/¯
    <VirtualHost *:80>
        # "example.com" below stands in for the real domain, which was stripped from the original post
        Redirect "/" "https://example.com/"
    </VirtualHost>

    <VirtualHost *:443>
        DocumentRoot "/var/www/example/public"
        DirectoryIndex index.htm
        <Directory /var/www/example/public>
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        SSLEngine on
        SSLCertificateFile /etc/letsencrypt/live/example.com/cert.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
    </VirtualHost>
    One more Apache restart:
    $ service apache2 restart


    sharkbait posted a photo:

    Iced Cherries

    Snow in Brixton Market


    sharkbait posted a photo:

    Brixton became Iceland

    Woke up early this morning, strangely quiet ... it had snowed at last (yesterday people had been complaining about there being none).
    Could not sleep so I thought I'd go out and take some pictures while the snow was still fresh.


    This is the fourth in a series of blog posts on the Apache Sentry security service. The first post looked at how to get started with the Apache Sentry security service, both from scratch and via a docker image. The second post looked at how to define the authorization privileges held in the Sentry security service. The third post looked at securing Apache Kafka with Apache Sentry, where the privileges were defined in the Sentry security service. In this post, we will update an earlier tutorial I wrote on securing Apache Hive with Apache Sentry to also retrieve the privileges from the Sentry security service.

    1) Configure authorization in Apache Hive

    Please follow this tutorial to install and configure Apache Hadoop and Apache Hive, except use version 2.3.2 of Apache Hive, which is the version supported by Apache Sentry 2.0.0. After installation, follow the instructions to create a table in Hive and make sure that a query is successful. Now we will integrate Apache Sentry 2.0.0 with Apache Hive. First copy the jars from the "lib" directory of the Sentry distribution to the Hive "lib" directory. We need to add three new configuration files to the "conf" directory of Apache Hive.

    Create a file called 'conf/hiveserver2-site.xml' with the content:
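    A sketch, assuming the standard Sentry Hive binding settings; verify the class and property names against the Sentry documentation:

    <configuration>
      <property>
        <name>hive.security.authorization.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>hive.security.authorization.manager</name>
        <value>org.apache.sentry.binding.hive.authz.SentryHiveAuthorizerFactory</value>
      </property>
      <property>
        <name>hive.sentry.conf.url</name>
        <value>file:./conf/sentry-site.xml</value>
      </property>
    </configuration>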

    Here we are enabling authorization and adding the Sentry authorization plugin. Note that it differs a bit from the hiveserver2-site.xml given in the previous tutorial, namely that we are not using the "v2" Sentry Hive binding as before.

    Next create a new file in the "conf" directory of Apache Hive called "sentry-site.xml" with the following content:
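    A sketch, with the property names as assumptions to verify:

    <configuration>
      <!-- where to find the Sentry security service -->
      <property>
        <name>sentry.service.client.server.rpc-addresses</name>
        <value>localhost</value>
      </property>
      <property>
        <name>sentry.service.client.server.rpc-port</name>
        <value>8038</value>
      </property>
      <!-- groups of authenticated users are read from sentry.ini -->
      <property>
        <name>sentry.hive.provider.resource</name>
        <value>./conf/sentry.ini</value>
      </property>
      <!-- required as we are not using Kerberos -->
      <property>
        <name>sentry.hive.testing.mode</name>
        <value>true</value>
      </property>
    </configuration>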

    This is the configuration file for the Sentry plugin for Hive. It instructs Sentry to retrieve the authorization privileges from the Sentry security service, and to get the groups of authenticated users from the 'sentry.ini' configuration file. As we are not using Kerberos, the "testing.mode" configuration parameter must be set to "true". Finally, we need to define the groups associated with a given user in 'sentry.ini' in the conf directory:
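    In line with the next sentence, the file simply maps the user to a group:

    [users]
    alice = user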

    Here we assign "alice" the group "user". Note that in the earlier tutorial this file also contained the authorization privileges, but they are not required in this scenario as we are using the Apache Sentry security service.

    2) Configure the Apache Sentry security service

    Follow the first tutorial to install the Apache Sentry security service. Now we need to create the authorization privileges for our Apache Hive test scenario as per the second tutorial. Start the 'sentryCli' in the Apache Sentry distribution, and assign a role to the "user" group (of which "alice" is a member) with the privilege to perform a "select" statement on the "words" table:

    • cr select_role
    • gp select_role "Server=server1->Db=default->Table=words->Column=*->action=select"
    • gr select_role user
    Now we can test authorization after restarting Apache Hive. The user 'alice' should now be able to query the table according to our policy:
    • bin/beeline -u jdbc:hive2://localhost:10000 -n alice
    • select * from words where word == 'Dare'; (works)

    0 0

    I am busy with fitness training to get into shape for a number of upcoming Camel talks I am doing over the next 3 to 4 months.

    I would like to give a big thank you to Andrea Cosentino and Zoran Regvart for designing and creating these new awesome Apache Camel t-shirts.


    After the heavy training I went out for a run in the cold and snowy weather here in Denmark.

    OS2 meeting in Copenhagen
    On Thursday March 8th I am attending the Danish OS2 meeting in Ballerup, Copenhagen, Denmark. Another Red Hat'er was supposed to give a talk at this event but had to cancel, and as we are not so many Danish employees, they asked if I could go. So the audience will have to endure my technical talk about Apache Camel microservices on Kubernetes.

    DevNation Live Webinar
    On Thursday March 15th I am doing a webinar hosted by Burr Sutter from Red Hat. The topic is Camel riders in the cloud.
    Apache Camel has fundamentally changed the way enterprise Java™ developers think about system-to-system integration by making enterprise integration patterns (EIP) a simple declaration in a lightweight application wrapped and delivered as a single JAR.
    In this session, we’ll show you how to bring the best practices from the enterprise integration world together with Linux containers, running on top of Kubernetes/OpenShift, and deployed as microservices, which are both cloud-native and cloud-portable.
    The webinar is free to attend and you can register from the DevNation website.

    JPoint Moscow
    In the beginning of April I am traveling for the first time to Moscow, Russia, to attend the JPoint conference and give a talk about developing Camel microservices on Kubernetes.

    I will take Mr Camel with me, as he doesn't want to miss the opportunity to drink the great Russian vodka. And hopefully Red Hat will sponsor a number of books so we can have giveaways or a book signing.

    Red Hat Summit
    I will then go to San Francisco in the beginning of May to attend and speak at the Red Hat Summit. I will give a Camel talk and then we are doing two workshops, one about Camel development with APIs and another about the new Fuse Online product (low-code integration platform).

    GR8Conf EU
    At the end of May I will be at the gr8conf in Copenhagen, where I will give a Camel talk to the Groovy crowd.

    Barcelona or Copenhagen in June
    And that is not all: the Fuse team is likely going to have a face to face meeting sometime in June, so I have to keep my calendar open for that. But hopefully it will fall into place so I have time to submit a paper for the awesome conference in Barcelona. Unfortunately the JDK.IO conference in Copenhagen clashes at the same time, so I cannot be in two places at once. But let's see, maybe they are tired of hearing about Camels. After all, I have been at the Barcelona conference 3 or 4 times.

    Hamburg running
    And I am going to Hamburg at the end of April for my first marathon run. I will travel down there on Friday, so hopefully I can catch up with Bennet Schulz, who works on the Camel IDEA plugin. The run is on Sunday.

    0 0

    This past year has been pretty hectic, so I haven't had a chance to update this blog more often.

    In my limited spare time last year, I started contributing to the Apache Ant project. Although Ant probably isn't as widely used as some years back, it is still used in many projects as the build tool. Some of the products I'm involved in do use Ant, and that motivated me to contribute some bug fixes. After a period of time, last year, I was invited to become a committer and, a few weeks back, to be part of the Ant project management committee (PMC), which I consider an honour.

    Just today, we released a couple of new versions of Ant - 1.9.10 and 1.10.2. These are essentially bug fix releases but do contain some new enhancements. The complete release notes for each of these releases can be found here and here.

    The downloads are available from the project's download page and the full announcement, in the mailing list, can be read here.

    If you have any issues/suggestions/feedback about the project, feel free to report it in the user mailing list which is listed on this page.

    0 0

    WildFly 12.0.0.Beta1 has been tagged and has been (I think) officially released. The announcement happened a few days back in the dev mailing list and, unlike previous releases, this time the release binaries seem to be only available in the Maven repository and can be obtained from here - WildFly 12.0.0.Beta1 distribution (the .tar.gz and .zip are the relevant ones). The list of changes for this release can be found in the JIRA release notes.

    As you'll notice in the release notes, there's some initial support for EE 8 specs, including Servlet 4.0 among others. Plus there are also numerous bug fixes in this release since the previous 11.0.0.Final version, which was released some months back. As usual, please give this version a try and if there are any issues or feedback that you would like to report, please start a discussion in the WildFly user forum.

    If you haven't been following the WildFly dev mailing list, there's also a discussion which outlines the release plans for WildFly going forward. You can find that discussion here.

    Finally, there have been major changes to the Java EE processes and committee, and even the name, over the past year. If you haven't been following those changes, then you can read through Mark Little's recent blogs, including the most recent ones which talk about the new brand name for Java EE and the setting up of the working group.

    0 0

    In this blog post, I wish to express my thoughts about new features that have been introduced in XML Schema 1.1 for defining XSD simpleType lists. I'd like to write a few XML Schema validation examples here illustrating them.

    Example 1: Using the <xs:assertion> facet, to enforce sorted order on the list data.

    Here's the XSD 1.1 document:

    <?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:vc="http://www.w3.org/2007/XMLSchema-versioning"
               vc:minVersion="1.1">

       <xs:element name="X">
          <xs:complexType>
             <xs:sequence>
                <xs:element name="a" type="xs:integer"/>
                <xs:element name="b">
                   <xs:simpleType>
                      <xs:restriction base="IntegerList">
                         <xs:assertion test="every $val in (for $x in 1 to count($value)-1 return ($value[$x] le $value[$x+1])) satisfies ($val eq true())">
                            <xs:annotation>
                               <xs:documentation>Assertion facet checking that, items in the list are in ascending sorted order.</xs:documentation>
                            </xs:annotation>
                         </xs:assertion>
                      </xs:restriction>
                   </xs:simpleType>
                </xs:element>
             </xs:sequence>
          </xs:complexType>
       </xs:element>

       <xs:simpleType name="IntegerList">
          <xs:list itemType="xs:integer"/>
       </xs:simpleType>

    </xs:schema>


    A valid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>-20 1 2 3</b>
    </X>

    (the integer list in element "b", is sorted)

    An invalid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>-20 1 2 3 1</b>
    </X>

    (the integer list in element "b", is not in a sorted order)

    Example 2: Using the <xs:assertion> facet, to enforce the size of the list using relational operators other than equality (equality was already supported in XSD 1.0 via the xs:length facet).

    Here's the XSD 1.1 document:

    <?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:vc="http://www.w3.org/2007/XMLSchema-versioning"
               vc:minVersion="1.1">

       <xs:element name="X">
          <xs:complexType>
             <xs:sequence>
                <xs:element name="a" type="xs:integer"/>
                <xs:element name="b">
                   <xs:simpleType>
                      <xs:restriction base="IntegerList">
                         <xs:assertion test="count($value) lt 10">
                            <xs:annotation>
                               <xs:documentation>Assertion facet checking that, cardinality/size of list should be less than 10.</xs:documentation>
                            </xs:annotation>
                         </xs:assertion>
                      </xs:restriction>
                   </xs:simpleType>
                </xs:element>
             </xs:sequence>
          </xs:complexType>
       </xs:element>

       <xs:simpleType name="IntegerList">
          <xs:list itemType="xs:integer"/>
       </xs:simpleType>

    </xs:schema>


    A valid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>-20 1 2 3</b>
    </X>

    (the integer list in element "b", has less than 10 items)

    An invalid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>-20 1 2 3 1 1 1 1 1 1 1</b>
    </X>

    (the integer list in element "b", has more than 10 items)

    Example 3: Using the <xs:assertion> facet, to enforce that each item of the list must be an even number.

    Here's the XSD 1.1 document:

    <?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns:vc="http://www.w3.org/2007/XMLSchema-versioning"
               vc:minVersion="1.1">

       <xs:element name="X">
          <xs:complexType>
             <xs:sequence>
                <xs:element name="a" type="xs:integer"/>
                <xs:element name="b">
                   <xs:simpleType>
                      <xs:list>
                         <xs:annotation>
                            <xs:documentation>The simpleType definition below, is itemType of this list. Every item of list must be an even number.</xs:documentation>
                         </xs:annotation>
                         <xs:simpleType>
                            <xs:restriction base="xs:integer">
                               <xs:assertion test="$value mod 2 = 0"/>
                            </xs:restriction>
                         </xs:simpleType>
                      </xs:list>
                   </xs:simpleType>
                </xs:element>
             </xs:sequence>
          </xs:complexType>
       </xs:element>

    </xs:schema>


    A valid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>2 4 6</b>
    </X>

    (the integer list in element "b", has each item as an even number)

    An invalid XML document, when validated by the above schema document:

    <?xml version="1.0"?>
    <X>
       <a>1</a> <!-- the value of "a" is incidental to this example -->
       <b>2 1 6</b>
    </X>

    (the integer list in element "b", has one or more items that are not even)

    As illustrated by the above examples, some new XML Schema validation scenarios are possible with the introduction of the <xs:assertion> facet on XSD simple types in version 1.1 of the XML Schema language.

    0 0

    This blog is moving to my own hosted site. There are lots of recent articles so please re-subscribe there.

    The new blog location is the new feed is

    The new site is hosted in our new data centre, under the stairs at FiveOne.Org's new headquarters in South London. The site is built using the wonderful Jekyll.

    See you there.

    0 0

    Proxies? In companies getting started with an upstream-first concept, this is the name given to people who act as the only interface between their employer and an open source project: All information from any project used internally flows through them. All bug reports and patches intended as upstream contributions also flow through them - hiding the entire teams producing the actual contributions.

    At Apache projects I learnt to dislike this setup of having proxies act in place of the real contributors. Why so?

    Apache is built on the premise of individuals working together in the best interest of their projects. Over time, people who prove to commit themselves to a project get added to that project. Work contributed to a project gets rewarded: working on an Apache project is a role independent of other work commitments, and in the "merit doesn't go away" sense this merit is attached to the individual making the contributions, not to the entity sponsoring that individual in one way or another.

    This mechanism does not work anymore if proxy committers act as a gateway between employers and the open source world: While proxied employees are saved from the tax that working in public brings by being hidden behind proxies, they will also never be able to accrue the same amount of merit with the project itself. They will not be rewarded by the project for their commitment. Their contributions do not end up being attached to themselves as individuals.

    From the perspective of those watching how much people contribute to open source projects the concept of proxy committers often is neither transparent nor clear. For them proxies establish a false sense of hyper productivity: The work done by many sails under the flag of one individual, potentially discouraging others with less time from participating: "I will never be able to devote that much work to that project, so why even start?"

    From an employer point of view proxies turn into single point of failure roles: Once that person is gone (on vacation, to take care of a relative, found a new job) they take the bonds they made in the open source project with them - including any street cred they may have gathered.

    Last but not least I believe in order to discuss a specific open source contribution the participants need a solid understanding of the project itself. Something only people in the trenches can acquire.

    As a result you'll see me try and pull those actually working with a certain project to get active and involved themselves, to dedicate time to the core technology they rely on on a daily basis, to realise that working on these projects gives you a broader perspective beyond just your day job.

    0 0

    As the ASF’s Annual Member’s Meeting approaches this month, the Membership has an opportunity to vote in new individual Members to the Foundation. I’ve written about how member meetings work and have proposed some process improvements.

    But the bigger question is: how can the membership better help the ASF succeed? What a Member can do at the ASF is documented, but what should Members consider doing? Where does the ASF need Members to help out, and how?

    Reminder: these are just my opinions. There are over 600 active Members; there are probably 700 different opinions on the subject. But I hope these ideas help people better understand how the ASF works internally.

    Help Multiple Apache Projects Work Together

    Everyone elected as a Member has worked on one or more Apache projects – code, documentation, evangelism, community – the how doesn’t matter; what the ASF tends to recognize is positive contributions and an understanding of the Apache Way of collaborative development. Members have a unique ability to review all private records at the ASF, including all Apache PMC private lists. This insight – and experience with working with different projects – helps to give Members a broader perspective on how to help community-driven projects succeed.

    This visibility means Members can have a great insight into how many of our projects work, and they often step up to help mentor other projects, even ones they don’t normally code on. Note that Membership does not grant any merit within individual projects; each Apache PMC votes in their own committers independently. But Membership is still a recognized merit, and the perspective on the larger ASF that members can bring is usually recognized positively by project communities. With over 190 independent project communities, cross-pollination of people and ideas is critical to keep us all functioning smoothly with volunteer-run projects.

    Mentor Apache Incubator Podlings

    This is the biggest area where I believe the ASF needs help: mentoring new prospective communities wishing to join the ASF. The Incubation process is designed to teach and mentor communities wishing to join the ASF on our processes: legal, brand, infrastructure, and especially community governance in the Apache Way. Since the ASF doesn’t have any paid staff in this area, we rely on the Membership to step up and serve as mentors to these communities.

    It’s also the easiest thing to get involved with – any Member can simply ask to join the Incubator PMC (IPMC); approval is automatic. Members can then ask to become a mentor for any podlings or incoming project proposals. This is a great way that Members can both help train communities on the Apache Way, but also have a chance themselves to see new technologies and communities that want to join Apache.

    We have an amazing set of people volunteering on the IPMC, helping the 50 or so podling communities the Incubator hosts at any one time. But the number of podlings, the varied community backgrounds, and the speed of new proposals arriving means we can always use more help in ensuring all podlings can learn how to become an Apache project.

    Step Up To Help With Corporate Governance Work

    While most people know something about how Apache’s collaborative projects work in building their code, many don’t know about all the work that goes on behind the scenes to provide the corporate services to all the communities at Apache. Members can join in and read all the lists where corporate operations happen: legal services, trademark management, infrastructure planning and maintenance, fundraising for the ASF, filing corporate taxes, and everything else a Delaware chartered corporation needs to do.

    These are all areas of corporate governance, policy management, or financial paperwork and oversight that are very different from the tasks that our projects do creating software and building communities. And since the ASF has very few paid staff/contractors, we rely on Members volunteering to step up and perform these tasks – many of which have hard deadlines, legal consequences, and some of which require specialized experience and knowledge.

    Many Members express curiosity, or ask questions or raise concerns about various areas of operations. But we have too few Members who can regularly put in the time and effort to productively help with the day-to-day tasks in some operational areas. This means that we all too often have to rely on a small pool of members who somehow find the time to keep up with these complex tasks. In some cases – for tasks that are clearly defined, and usually highly time-sensitive – we’ve hired or contracted the work. While that ensures tasks are done properly, it still requires oversight and management by our volunteer officers, and of course requires the budget to pay for those services.

    If You Are An Apache Member

    If you’re a Member interested in my advice, I’d say:

    • Be realistic about how much time and thought you can invest. Don’t get burned out, and don’t over-promise, unless you can also quickly and clearly let other volunteers know that you’ve over-promised. Especially around financial matters (which are HIGHLY time sensitive) we’ve had a number of Members burn out quite painfully, for the person and the organization both.
    • Do your research! All major operations lists are open to Members to review and subscribe. Review the org chart and click on the officer responsibilities, read the mailing list and see who’s there and how it works, and volunteer clearly for what you think you can do.
    • Help out, and be patient. Just like you gain merit in an individual project by showing up and being helpful for a while, help out on list for a while, and be sure to ask questions about how the operations work. Once you’ve shown some expertise and willingness to help with the work, the officer or committee there can help you take it to the next level.

    I hope this helps explain how the ASF works better, and if you are a Member, please feel free to ask me or the officers in any operations area for more information, and volunteer your help!

    The post What Apache Needs In Foundation Members appeared first on Community Over Code.

    0 0
  • 03/09/18--10:43: Jeremy Quinn: Weir [Flickr]
  • sharkbait posted a photo:


    Crossing the Thames to Ash Island at night.
    The fabulous roar of the water over the weir.
    A constant slight shudder in the structure makes clear the primal force of the water.
    Still astonished at what this old iPhone6s will do!

    0 0
  • 03/09/18--14:21: Nick Kew: Concerts
  • Dammit, I should have blogged this a week ago!

    I have three concerts coming up.

    First, one with Rossini’s Petite Messe Solennelle, tomorrow evening (Saturday, March 10th) at St Andrews – Plymouth’s main church.  I can do no better than repeat what I wrote here when I last sang in it – with a different choir:

    a lovely and startlingly unique piece. Perhaps it takes a septuagenarian Old Master – as Rossini was in 1863 – to have the confidence to write something quite so cheekily uncharacteristic of its time. It certainly shows the complete mastery of a lifetime’s experience, together with a creative imagination undulled by age!

    Second, next Sunday, March 18th, with my most regular choir at the Guildhall, Plymouth.  This is a concert of several shorter works from the English repertoire, amongst which Vaughan Williams’ Five Mystical Songs are the highlight.  Also worth hearing are Rutter’s Gloria, and Stanford’s Songs of the Fleet.  Sadly there’s also some dreary muzak from Karl Jenkins.  This is with the band of the Royal Marines in place of our usual orchestra, and the podium will be shared by both their and our regular conductors.

    The third concert is a programme on the theme of the Christian death and resurrection, to be given at Buckfast Abbey on Saturday, March 24th.  The pick of this chamber concert is probably some gorgeous works by Herbert Howells, and the programme also includes Fauré’s Requiem and shorter anthems.

    0 0

    I was at the SCaLE16X conference in Pasadena this week for a conversation about Next Generation Directory-based User Management for Cloud Infrastructure.

    Slides are here

    0 0

    The 2018 Candidates Tournament is underway!

    The official site is having some troubles, but you can find all the games at several other sites, including ChessBase, for example.

    Kramnik is off to a strong early start, with 2.5 points from 3 games, but the action has been lively and it is far too early to see how this goes.

    Must. Find. Time. To. Follow. These. Beautiful. Games!

    0 0

    0 0

    The ASF is holding its annual Members' Meeting next week to elect a new board and a number of new Members to the ASF.  I’m honored to have been nominated to stand for the board election, and I’m continuing my tradition of publicly posting my vision for Apache each year – including my 2017 board statement.

    Please read on for my take on what’s important for the ASF’s future…

    Shane’s Director Position Statement 2018

    IF you would like a director who will:

    • Ensure the board and all officers communicate more politely, respectfully, and clearly – both within private spaces and especially when communicating with PMCs and our communities.
    • Continue to document the many unwritten rules, best practices, and procedures at the ASF – and explain them clearly, and make them easy to find for newcomers and long-time members alike so that everyone can understand and participate fairly.
    • Provide links to source documents, and insist that proposals for improvements or objections to change are based on facts, documented principles, and goals (like the 5-year plan) that we all agree on.
    • Attend every board meeting, read your reports, and give thoughtful and helpful feedback to our PMCs only when needed, and be available as a resource to all our communities.

    THEN I hope you’ll mark your first place vote for Shane (or second place, if first goes to a newcomer!).

    Apache is doing great, but it needs your help to scale

    I believe that the ASF continues to do amazingly well, and has stayed true to our core values of independent communities. Our success is more than our software and the many Apache communities: the many other Foundations and independent open source projects that have copied the Apache Way show the tremendous positive impact the ASF has brought to the world.

    The ASF’s biggest issue is scaling effectively: better supporting all our communities while helping them improve and follow the Apache Way. To do this we need to improve our documentation – so that everyone is working from the same playbook, and so it’s easy to understand and follow. We also need committed and engaged Members – to step in and actively help with corporate operations. And Incubator Mentors are critical – to make sure the 50+ podlings can grow into self-governing communities that truly follow the Apache Way, and are less likely to need help after graduation.

    The two things we need from Apache Members (and committers too!) are – to serve as great mentors to the project communities they work with, and to step up doing the day-to-day operations work of shaping the future of the ASF as an organization spanning all Apache communities.

    Want to know more about Shane?

    See what else I’ve said about being a Director at the ASF in past years.

    I regularly speak at open source conferences and post my slides and videos, and I write about the Apache Way.

    I’ve attended every monthly board meeting (even when not a director) since 2009, except for the month my father passed away.

    Thanks for reading, and I look forward to everyone stepping up to help make the ASF a great place for communities to work together positively!

    The post Shane’s Director Position Statement 2018 appeared first on Community Over Code.

    0 0

    We have just released Apache Camel 2.21 and I will in this blog highlight the noteworthy changes.

    This release does NOT support Spring Boot 2.
    Support for Spring Boot 2 will come in Camel 2.22,
    which we plan to release before summer 2018.

    1) Working with large JMS messages 
    We have added better support for working with large messages in streaming mode in the JMS component. I have previously blogged about this.

    2) FTP supports resume download
    The FTP component can now resume downloads. For example if you download very big files, and have connectivity issues, then the FTP consumer will be able to resume the download upon re-connect.

    3) FTP with pollEnrich
    The FTP component has been improved to work better with pollEnrich (Content Enricher EIP) to poll a file on demand. Now the current thread is used for this as a synchronous task, instead of starting the background scheduler (which the regular consumer uses).

    4) FTP activity logging
    The FTP component now reports more activity when it downloads, uploads, scans for files, etc., which you can see in JMX and in the logs (you can set the logging level). This should help to better track how much of the files has been downloaded/uploaded and what remains.

    5) Easier configuration of RabbitMQ
    The RabbitMQ component can now be configured on the component level, where you can set up broker details, logins, etc., so you do not have to repeat this in all the endpoint URIs. This is similar to how you use the other messaging components such as JMS.
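
    In Java this could look roughly like the following (a sketch; the setter names mirror the endpoint option names and may differ slightly between versions, and the broker details are placeholder values):

    // configure the broker once on the component
    RabbitMQComponent rabbitmq = context.getComponent("rabbitmq", RabbitMQComponent.class);
    rabbitmq.setHostname("broker.example.com");  // placeholder host
    rabbitmq.setPortNumber(5672);
    rabbitmq.setUsername("camel");
    rabbitmq.setPassword("secret");
    // endpoint URIs can then stay short, e.g. from("rabbitmq:orders?queue=incoming")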

    6) Spring Boot route actuators
    The Camel Spring Boot actuators are now in read-only mode by default. The route actuator endpoints can have read-only mode turned off, which allows managing the lifecycle of the Camel routes. In addition, more details can be retrieved, such as an XML dump of the routes.

    7) Rest DSL API-Doc with examples
    The Rest DSL can now also include examples in the DSL, which allows generating the Swagger/OpenAPI documentation with examples included.

    8) Claim Check EIP
    There is a new Claim Check EIP which makes it much easier to store information from the exchange during routing, and then retrieve it later (think of it like a push/pop). You can find more detail in the EIP doc.
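
    A minimal route using it might look like this Java DSL sketch (the direct:enrich step is a hypothetical placeholder for work that replaces the message body):

    import org.apache.camel.model.ClaimCheckOperation;

    from("direct:start")
        .claimCheck(ClaimCheckOperation.Push)   // stash the current message
        .to("direct:enrich")                    // hypothetical step that replaces the body
        .claimCheck(ClaimCheckOperation.Pop)    // restore the stashed message
        .to("mock:result");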

    9) Saga EIP
    There is a new Saga EIP for simulating transactions in distributed systems. The Saga EIP has plugins for different Saga services that orchestrate the transactions.

    10) More components
    And as usual there are more components. For example there are two new AWS components, for KMS and MQ. There is also our first component to integrate with cryptocurrencies.

    11) Testing with route coverage
    We have added support for running unit tests with route coverage reports turned on. This allows you to check whether you have tests that cover all paths of your routes. Camel tools such as the Camel IDEA plugin will work on adding support for presenting the report, and on having indicators in the source code about the coverage (e.g. like you have for Java code coverage). The Camel Maven Plugin has a goal to output the route coverage.
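
    As a sketch: in a unit test extending CamelTestSupport you turn on the coverage dump, and afterwards run the Maven goal (goal name as introduced with this feature; check the plugin docs for your version):

    // in your test class extending CamelTestSupport
    @Override
    public boolean isDumpRouteCoverage() {
        return true;
    }

    $ mvn camel:route-coverage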

    12) Testing with advice-with output before vs after
    When using advice-with, we now log the before vs. after routes in XML, to make it easier for developers to see what their advice has changed in the routes.

    There are a bunch of other smaller improvements and other things I have left out or forgotten about. You can find more details in the Camel 2.21 release notes.

    0 0

    0 0

    With Apache board elections coming up soon, an ASF Member came up with a great set of questions for all director candidates. With permission, I’m sharing those questions here, and providing my answers as well.

    I’ve also posted my own Director Position Statement for this year (and past years!).

    Questions For Board Candidates

    Missions, Visions…and Decisions

    The ASF exists with a primary goal of “providing open source software to the public, at no charge”. What do you consider to be the foundation’s most important secondary (implicit) goal?

    Fostering independent and collaborative communities; in particular by mentoring and encouraging our contributors and members to step up and become mentors and exemplars of the Apache Way so they can help others.

    Looking ahead, 5 years, 10 years…what do you hope the biggest change (that you can conceivably contribute to) to the Foundation will be, if any? What are your greatest concerns?

    That all Board <-> PMC communications are simple, concise, and friendly.

    Think about the phrase “the board is a big hammer” – is that really how we want to operate? I’d rather have us conceptually expand the IPMC Mentor <-> Podling PPMC relationship, where the board (and the Membership at large) serve as regular and helpful mentors to Apache projects throughout their lifetime. In the very rare cases where a PMC does go off the rails, we need the board as a whole to cleanly engage as a kind but firm mentor, making it a friendly learning process, not a mixed message of a hammer from multiple people.

    My greatest concern is our entirely flat hierarchy: we can’t change that, but we must learn how to focus it and keep it polite. Flat works on a project scale, because everyone on the PMC has one codebase they all use. Flat at the Foundation level means we have 600+ Members who can each jump in to try to fix a problem in a single PMC.

    Which aspect(s) (if any) of the way the ASF operates today are you least satisfied with? What would you do to change it?

    Discussion behavior on private mailing lists. Our projects have pretty good behavior – because they’re smaller active communities, and because they are focused on software, where there tend to be obvious answers to technical questions.

    At the Member level, we need to deal with corporate operations and policy making. Budget, legal, brand, press – these are areas that 1) many of us don’t have deep expertise in, and 2) don’t have as obvious or simple answers as software problems do.

    Ensuring that the Membership, the board, and all our officers keep conversations here focused, polite, and productive is our biggest challenge. Since we rely on volunteers for all policy making, making this process more welcoming is the first step to improving the actual operations and work we do to provide support to all our projects.

    Budget and Operations

    Which roles do you envision moving towards paid roles. Is this the right move, and if not, what can we do to prevent/delay this?

    Infra, obviously: keeping our infra team fully staffed – anytime we can provide more services safely with our own staff to our projects helps ensure their independence from corporate donors.

    The other area is editorial help: while we have many, many useful emails, web pages, and presentations explaining the Apache Way, we need to organize them all and provide well-written materials that everyone can find and understand easily, even newcomers. Even a short-term information architect or editor to organize the developer portal and Community Development website, and give a focus and structure to the great volunteer-created content we have would be incredibly valuable. Volunteers do a great job on the parts; having a coherent whole would mean we can succeed in reaching our communities easily.

    What, if anything, would you do to ensure that budgets are approved on time? does it even matter?

    It absolutely matters – both to our suppliers (like Virtual and our contractors!) and to our volunteer officers who manage expenses. The first step is to clearly document the process – it’s all explained in emails on the Operations list each year, but we don’t have a comprehensive budget calendar and process documented on the website at a stable URL. That is simple to fix – modulo volunteers to actually write, edit, and check the work in.

    80%+ of budgets should be super-simple and obvious – we need cloud servers and infra peeps. The rest of budget should be clear proposals by volunteer officers stating a need, a plan to meet that need, and a cost for the board’s consideration.

    If you had to pick a keyword for budget planning and execution, which would it be? Things like “transparency”, “timeliness”, “cost-effective”

    Transparency, clearly documented process.

    Membership and Governance

    Should the membership play a more prominent role in decision-making at the ASF? If so, where do you propose this be?

    Yes – on the same lines as merit within our project communities. That means providing alternatives or documented reasons for -1s; it also means showing up to help do the work for +1s.

    Membership brings access to all areas of Foundation operations, but we need to remember that each operational area has its own merit that should be earned independently. Being a Member means you can show up on any list but doesn’t mean you get a binding vote there yet.

    What would be your take on the cohesion of the ASF, the PMCs, the membership and the communities. Are we one big happy family, or just a bunch of silos? Where do you see it heading, and where do we need to take action, if anywhere?

    We’re a bunch of big happy families. Happy is good, but different families means we often have different views of what being an Apache project means.

    We need two things: better document what the “Apache Way” means, and what the board expects projects to do (or not), and more organized and cohesive mentoring and culture sharing across all of our projects – and across Foundation operations too.

    If you were in charge of overall community development, what would you focus on as your primary and secondary goal? How would you implement what you think is needed to achieve this?

    The real need for the ComDev PMC is to focus and empower our many volunteers to improve our overall documentation, education, and social message as a cohesive whole that helps our communities do their work.

    The key tasks are to put forth some clear goals, and mentor/help/encourage all committers that show up on how to build effective and lasting materials that we can use and re-use across all our communities – and outside our communities as well. We have lots of individual volunteers and separate bits of content – what we need now is tying them together, and ensuring we have better writing and editing so the information is easy to approach and understand.

    Show and Tell

    What is, in your view, your proudest accomplishment in your time at the ASF? How’d that make you feel?

    Creating our comprehensive set of trademark and branding policies, including explanations of how to deal with issues suited to our communities, both in terms of content and expertise level. Bonus achievement: other organizations have copied parts of those policies and procedures, and I’ve been asked by FOSS leaders and lawyers at other organizations for help as well. That makes me feel great, for achieving something important enough that others want to use it.

    Which abilities/skills do you think you’ll bring to the board, that would improve or strengthen the foundation?

    Clear and polite communication and organization skills, to help us be efficient at making decisions, and ensure that our discussions, records, and any policy or best practices we decide on are easy for the world to understand – both the how, and the why.

    Who do you admire the most at the foundation (past/present), and may we know why?

    Brian Behlendorf, one of the ASF’s founders, wrote lots of cool code, ran servers, all sorts of amazeballs useful stuff, etc. But I admire him for his humility, kindness, and helpfulness to everyone who asks.

    Very early committers remember when they first got their Apache accounts, Brian would personally send friendly and welcoming root@ emails with their new account details – since some of the original ASF servers were in his house. That friendliness and willingness to help is my goal.

    Ponies or gnomes? (yes/no?)

    Ponies, of course! Trick question – gnomes muck up all the works.

    Thanks for the great questions!

    The post Where Is The ASF Going? Director Q&A appeared first on Community Over Code.

    0 0

    Once again late to the party, I came across Neil Gaiman's American Gods.

    And devoured it.

    My reaction to Neil Gaiman, in general, is quite similar to my reaction to Stephen King: amazing, fascinating, compelling books, but often the subject matter, or theme, or setting, is too disturbing for me and I avoid even attempting the book.

    American Gods is plenty disturbing, no doubt about it.

    But it is also intoxicating and absorbing.

    Whenever I think about Stephen King, and how he must work, I envision that there is some moment where he suddenly gets an idea, vivid and remarkable, and then he develops it and develops it and develops it, and the result is The Dark Tower, or some such.

    With American Gods, I wonder if the original spark for Gaiman was actually captured in the title of the book, and perhaps went something like this: Who are the American Gods? We know about Norse Gods, and Greek Gods, and Egyptian Gods, and Chinese Gods, so surely there must be American Gods?

    And as he thought about this, perhaps he thought, well: people came to America, and so perhaps their gods came to America, too?

    Hyacinth learned some French, and was taught a few of the teachings of the Catholic Church. Each day he cut sugar cane from well before the sun rose until after the sun had set.

    He fathered several children. He went with the other slaves, in the small hours of the night, to the woods, although it was forbidden, to dance the Calinda, to sing to Damballa-Wedo, the serpent god, in the form of a black snake. He sang to Elegba, to Ogu, Shango, Zaka, and to many others, all the gods the captives had brought with them to the island, brought in their minds and their secret hearts.

    And yet, gods also emerge from a place, so what sort of gods might emerge from America? Well it would depend a lot on what Americans believed in:

    "I can believe things that are true and I can believe things that aren't true and I can believe things where nobody knows if they're true or not. I can believe in Santa Claus and the Easter Bunny and Marilyn Monroe and the Beatles and Elvis and Mister Ed. [...] " She stopped, out of breath.

    Shadow almost took his hands off the wheel to applaud. Instead he said, "Okay. So if I tell you what I've learned you won't think that I'm a nut."

    "Maybe," she said. "Try me."

    "Would you believe that all the gods that people have ever imagined are still with us today?"

    "... maybe."

    "And that there are new gods out there, gods of computers and telephones and whatever, and that they all seem to think there isn't room for them both in the world. And that some kind of war is likely."

    But what would happen as these new gods emerged? And what would happen to those old gods, here in America?

    "This is a bad land for gods," said Shadow. As an opening statement it wasn't Friends, Romans, Countrymen, but it would do. "You've probably all learned that, in your own way. The old gods are ignored. The new gods are as quickly taken up as they are abandoned, cast aside for the next big thing. Either you've been forgotten, or you're scared you're going to be rendered obsolete, or maybe you're just getting tired of existing on the whim of people."

    The problem is, as Gaiman observes, that America is America, and that has some pretty serious consequences, both for the old and for the new:

    There was an arrogance to the new ones. Shadow could see that. But there was also a fear.

    They were afraid that unless they kept pace with a changing world, unless they remade and redrew and rebuilt the world in their image, their time would already be over.

    American Gods is already 17 years old, and as I read through it I thought it was fated to be a book stuck in a certain time. After all, for a book about "gods of computers and telephones and whatever," there isn't a self-driving car or a social media app or a virtual reality headset to be found anywhere in the book.

    But as Gaiman, an Englishman and yet also a converted American, knows deeply in his soul, so much of what makes America America is distinct from the momentary matters of a certain time or place:

    "The battle you're here to fight isn't something that any of you can win or lose. The winning and the losing are unimportant to him, to them. What matters is that enough of you die. Each of you that falls in battle gives him power. Every one of you that dies, feeds him. Do you understand?"

    Laser-focused and razor-sharp, Gaiman's clarity of vision and courage to let the truth emerge from the telling produces a sure and solid result, a book that doubtless will be read and re-read decades from now, for its story, in the end, is timeless.

    0 0

    We're nearly halfway through the 2018 Candidates Tournament (6 of the 14 rounds have been played).

    The contest is hard-fought, with not much space from first (Caruana) to last (Karjakin). There have been 9 decisive results, and 15 draws. Of the decisive results, 5 have been with the white pieces, and 4 with the black pieces. Kramnik's games have been the sharpest, as he has had 2 wins, 2 losses, and 2 draws. Only Ding Liren, the 25-year-old Chinese superstar, has no decisive results yet, playing 6 draws so far.

    Meanwhile, if all these beautiful, if deep and mysterious, grandmaster chess games aren't providing you enough entertainment, perhaps you need to liven things up (and no, I don't mean you should start rooting for the University of Maryland Baltimore County Retrievers, wonderful though last night's result was)?

    Rather, you could get yourself over to Twitch, and tune in to the hottest e-Sport online: I Want My ChessTV

    Compare that to a typical session with the Chessbrahs, the most popular chess streamers on Twitch. Over the course of one of their streams, which can last up to four hours, you might see chairs thrown amid a torrent of f-bombs, freestyle rapping mid-game, and a never-ending barrage of trash talk. This is the new, online era of chess—set to the soundtrack of dance music.

    Although certainly not the same thing as the Chessbrahs, chess as an e-Sport is finding, perhaps, some real traction.

    Here, locally, there's a significant e-Sports chess event just a few weeks away: PRO Chess League Finals Set For San Francisco

    The world's best chess players will travel to San Francisco to compete in a live championship organized by Chess.com and Twitch, the companies announced today. This epic event will be the culmination of Chess.com's Professional Rapid Online (PRO) Chess League, a groundbreaking, season-long competition with the world's top chess players representing international regions. The two-day event kicks off at 10 a.m. on April 7 at the Folsom Street Foundry and will also be live-broadcast exclusively on Chess.com's Twitch channel.

    Twitch has immense resources behind it, as it is part of Amazon now.

    So, who knows? Maybe this is really a thing?

    0 0

    py-fortress is a Python API implementing Role-Based Access Control level 0 – Core.  It’s still pretty new, so there are going to be some rough edges that will need to be smoothed out in the coming weeks.

    To try it out, clone its git repo and use one of the fortress docker images for OpenLDAP or Apache Directory.  The README has the details.

    py-fortress git repo

    The API is pretty simple to use.

    Admin functions work like this:

    # Add User:
    admin_mgr.add_user(User(uid='foo', password='secret'))
    # Add Role (call name assumed from the add_user/add_perm pattern):
    admin_mgr.add_role(Role(name='customer'))
    # Assign User:
    admin_mgr.assign(User(uid='foo'), Role(name='customer'))
    # Add Permission:
    admin_mgr.add_perm(Perm(obj_name='shopping-cart', op_name='checkout'))
    # Grant:
    admin_mgr.grant(Perm(obj_name='shopping-cart', op_name='checkout'), Role(name='customer'))

    Access control functions:

    # Create Session, False means mandatory password authentication.
    session = access_mgr.create_session(User(uid='foo', password='secret'), False)
    # Permission check, returns True if allowed:
    result = access_mgr.check_access(session, Perm(obj_name='shopping-cart', op_name='checkout'))
    # Get all the permissions allowed for user:
    perms = access_mgr.session_perms(session)
    # Check a role:
    result = access_mgr.is_user_in_role(session, Role(name='customer'))
    # Get all roles in the session:
    roles = access_mgr.session_roles(session)


    In addition, there’s the full complement of review APIs as prescribed by RBAC.  If interested, look at the RBAC modules:

    Each of the modules has comments that describe the functions, along with their required and optional attributes.

    Try it out and let me know what you think.  There will be a release in the near future that will include some additional tooling.  If it takes off, RBAC1 – RBAC3 will follow.

    0 0

    Last week I presented a DevNation Live session - Camel Riders in the Cloud.
    Apache Camel has fundamentally changed the way enterprise Java™ developers think about system-to-system integration by making enterprise integration patterns (EIP) a simple declaration in a lightweight application wrapped and delivered as a single JAR. In this session, we’ll show you how to bring the best practices from the enterprise integration world together with Linux® containers, running on top of Kubernetes / OpenShift, and deployed as microservices, which are both cloud-native and cloud-portable.
    The video of this talk has now been processed and uploaded to YouTube - enjoy.

    The slides and source code are available on GitHub at:

    I would also encourage you to take a look at the other past sessions from DevNation Live. For example Burr Sutter's excellent session on Istio.

    0 0

    One of those typical American TV series .. a story about a team in a secret spy organization solving problems together. Korea has a culture of "who are you close with?", or "don't hang out with that kid, be friends with this one", plus a competitive one that splits people into factions by apartment size, so as someone raising a child I find this genuinely enviable. I hope things change toward an environment that embraces diversity and solves problems as a team.

    0 0

    You have probably heard that all companies are now software companies and that to compete you need to be embracing cloud native applications to ensure your company has the ability to adapt quickly - or you will soon be out of business. This is true by the way, but it's also a little daunting for all of us. You have to be an expert in your programming languages of choice, be well versed in how your business works, and now need to understand what exactly cloud native means, what containers are, and what the latest trends in cloud technologies are.

    So let's make a quick list of things you should be doing:

    1. Decompose your monolithic applications - or not. To generalise, you probably really should give this a lot of careful thought: it's domain and application specific - i.e. only you can decide if it makes sense (but it probably does).

    2. Containers - or Docker or Moby - though really you need to think about containerd, or rkt, or something that complies with the OCI. Which is why people talk about containers - it's all got too confusing.

    3. Orchestrating your containers - i.e. service discovery, auto scaling, and a fault tolerant platform to run your containerised applications. Thankfully, this recently got a whole lot easier - with Kubernetes emerging as the one that will be supported by all the major public clouds, and existing cloud platforms.

    4. Configure, release, version, rollback and inspect your application deployments in a standard, or de facto standard, way. Luckily Helm is excellent for this.
    5. CI/CD - in order to extract the most value from cloud native, you need a continuous delivery system that will enable a predictable, repeatable release process, and enable continuous improvement via streamlining your development processes.
    6. Monitoring - ideally you want to monitor the performance of your deployed applications and feed this back into your CD system.

    This is an intimidating list to look at, but it's probably already out of date, because technology is moving at such a pace it's really difficult to keep up.

    What if you could abstract yourself away from all the technology bleeding edge and save yourself the paper cuts - and just concentrate on adding business value? Well, this is the aim of Jenkins X - a new project that is part of the Jenkins ecosystem, as explained by James Strachan here.

    We started Jenkins X at the beginning of the year, taking the experience gained from the fabric8 project to develop an open source system that was targeted at these aims.  By concentrating on Kubernetes and utilising its wider ecosystem at the Cloud Native Computing Foundation, we have been able to develop a robust, targeted project that focuses on the needs of developers of cloud native apps.

    In summary Jenkins X provides the following:

    1. Abstracts away the gnarly bits of cloud native (you probably don't want to be concerned with using skaffold, for example). It's all still there, you can peel back the curtain as much as you like, but it's nicer if you don't have to.
    2. Automated CI/CD pipelines - using Jenkins, configured to work well for cloud native.
    3. Environments - promotion using git-based workflows, and preview environments for pull requests.

    This is all set up for you - you can aim Jenkins X at an existing project (or create a new one from scratch) - and select your cloud provider of choice and let us do the rest.
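
    For example, with the jx command line this is roughly (command names as of early 2018; flags and provider support evolve quickly):

    $ jx create cluster gke    # create a Kubernetes cluster on GKE and install Jenkins X into it
    $ jx import                # import an existing project and set up its CI/CD pipeline
    $ jx create quickstart     # or create a brand new project from a quickstart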

    A small(ish) caveat: we target Kubernetes, so check here first.

    Most public cloud providers allow free trials, or provide credits - which is more than enough to kick the tyres with Jenkins X - or you could try a local-machine solution, such as Minikube.

    0 0

    On the day that both Claire's and Toys 'R' Us file for bankruptcy, perhaps we can pause briefly and contemplate:

    • How vulture capitalists ate Toys 'R' Us
      After big success in the 1980s, Toys 'R' Us' performance turned lackluster in the 1990s. Sales were flat and profits shrank. Toys 'R' Us was a public company at the time, and the board of directors decided to put it up for sale. The buyers were a real estate investment firm called Vornado, and two private equity firms named KKR and Bain Capital. [...]

    The trio put up $6.6 billion to pay off Toys 'R' Us' shareholders. But it was a leveraged buyout: Only 20 percent came out of the buyers' pockets. The other 80 percent was borrowed. Once Toys 'R' Us was acquired, it became responsible for paying off that massive debt burden[...]


      Whatever magic Bain, KKR, and Vornado were supposed to work never materialized. From the purchase in 2004 through 2016, the company's sales never rose much above $11 billion. They actually fell from $13.5 billion in 2013 back to $11.5 billion in 2017.

    • Claire's Plans Bankruptcy, With Creditors Taking Over
      Claire’s Stores Inc., the fashion accessories chain where legions of preteens got their ears pierced, is preparing to file for bankruptcy in the coming weeks, according to people with knowledge of the plans.

      The company is closing in on a deal in which control would pass from Apollo Global Management LLC to lenders including Elliott Capital Management and Monarch Alternative Capital, according to the people, who asked not to be identified because the matter isn’t public. Venor Capital Management and Diameter Capital Partners are also involved, the people said. The move should help ease the $2 billion debt load at Claire’s.

    • America’s ‘Retail Apocalypse’ Is Really Just Beginning
      The root cause is that many of these long-standing chains are overloaded with debt—often from leveraged buyouts led by private equity firms. There are billions in borrowings on the balance sheets of troubled retailers, and sustaining that load is only going to become harder—even for healthy chains.

      The debt coming due, along with America’s over-stored suburbs and the continued gains of online shopping, has all the makings of a disaster. The spillover will likely flow far and wide across the U.S. economy. There will be displaced low-income workers, shrinking local tax bases and investor losses on stocks, bonds and real estate. If today is considered a retail apocalypse, then what’s coming next could truly be scary.

    0 0

    0 0

    • SXSW 2018: A Look Back at the 1960s PLATO Computing System – IEEE Spectrum

      Author Brian Dear on how these terminals were designed for coursework, but students preferred to chat and play games […] “Out of the top 10 programs on PLATO running any day, most were games,” Dear says. “They used more CPU time than anything else.” In one popular game called Empire, players blast each other’s spaceships with phasers and torpedoes in order to take over planets.
      And PLATO had code review built into the OS:
      Another helpful feature that no longer exists was called Term Comment. It allowed users to leave feedback for developers and programmers at any place within a program where they spotted a typo or had trouble completing a task. To do this, the user would simply open a comment box and leave a note right there on the screen. Term Comment would append the comment to the user’s place in the program so that the recipient could easily navigate to it and clearly see the problem, instead of trying to recreate it from scratch on their own system. “That was immensely useful for developers,” Dear says. “If you were doing QA on software, you could quickly comment, and it would track exactly where the user left this comment. We never really got this on the Web, and it’s such a shame that we didn’t.”

    (tags: plato computing history chat empire gaming code-review coding brian-dear)

    0 0

    Last week, while helping my wife load a household appliance we were donating into her aunt’s pickup, came the sickening sound of my right bicep detaching itself from the distal tendon at the elbow.  The pain was bad, of course, but the realization of the extent of the injury was worse.  Suddenly plans of completing a 3rd consecutive Dirty Kanza were nixed.  In addition to a surgical reattachment, performed yesterday, there are several months of recovery and rehab before I can return to riding once again.

    In situations like this one must focus on the positives.

    • Injury to right arm and I’m left-handed.
    • We have health insurance and can take the steps necessary for a full recovery.
    • Support of a wonderful family, friends and employer.
    • I can still code.
    • Inside trainer to maintain conditioning on order.

    There’s not much value in thinking about the negatives or what-ifs.  Life has a way of throwing curve balls.  Find a way of knocking the cover off it anyway.

    As far as what’s next: I already mentioned the trainer, which will be a way to maintain conditioning during the lull.



    Once the splint comes off and the brace is opened enough to hold on, I am going to try to get some rides in (despite doctor’s orders) and we’ll see what happens come June 2nd.


    photo courtesy of



    0 0

    "Open up the champagne, pop! 🍾" -- Flo Rida, My House

    I’m thrilled to announce that Hefe, my 1966 21-Window VW Bus, is finally finished!

    It only took 4,342 days, starting on April 17, 2006 and ending just a couple weeks ago (March 7, 2018).

    When I last wrote about Hefe, I mentioned he was in the shop getting a better stereo.

    For Hefe's stereo, I tried going phone-only for a controller. This turned out to be a bad idea, mostly due to bit Play HD and its terrible mobile app. Also, Hefe is lowered and a bit bumpy in the front, so trying to use a touch screen while driving doesn't work very well. He's in the shop now getting a new deck installed.

    My dad and I visited Elevated Audio to pick him up two weeks ago today. I’ve known the owner, Andrew, ever since I hired him to install a sweet system in Stout the Syncro in 2013. Back then, his business was named Andrew’s Installs. Fast forward five years and his business is thriving. For a good reason too, his team and their attention to detail is magnificent.

    It sounds fucking incredible.

    Having Hefe finished sometimes makes me misty eyed when I drive him.

    I was especially pumped to get Hefe back because we’d signed him up to be in Denver’s St. Patrick’s Day Parade.

    That’s where the untold story begins.

    Last Friday evening, I washed and polished him to get ready. While cleaning him, I accidentally sprayed a bunch of water on the engine. It’s a no-no to drench a car’s engine when it’s not running. I’d done this to our Syncro six months after we got it and it might’ve contributed to our engine’s untimely death.

    Hefe's all dressed up and ready for Denver’s St. Paddy's Parade tomorrow. Hope to see you there! 😍👌🎉

    After I finished, I tried to start Hefe. The engine turned over just fine, but it’d barely fire and never catch. I pumped the gas pedal a bunch and eventually gave up thinking I'd flooded the engine. I told myself to revisit the problem in an hour; maybe things would dry out by then.

    I didn’t tell Trish about the problem until I’d tried (and failed) to start him an hour later. I took off the distributor cap and dried things out. I wiggled and re-routed some wires. Moving wires around made the spark plugs fire but in the wrong order. I reverted my changes and told Trish the bad news.

    We couldn’t be in the parade without a running bus.

    I cursed, loudly.

    Trish’s high-school friend was flying in from NYC with her family that night. Trish left for the airport to pick them up, suggesting “we could go skiing instead” as she left.

    Shortly after, I recognized my lousy attitude and vowed to turn things around.

    “Now it’s flooded,” I thought. I knew the wires were correct.

    I threw on my University of Denver hockey jersey and went to my living room to finish watching them in a playoff game. They beat the Minnesota Duluth Bulldogs 3-2, and I celebrated with a cold Guinness.

    Then I strolled outside, sat in Hefe, told him he could do it and started him right up. 💥

    Wahoo! He recovered!!

    The parade was epic.

    Lined up and ready for Denver’s #stpaddysday parade! 🍀

    We all felt glorious; cranking the stereo, blowing bubbles out the top, dancing up a storm, and basking in the happiness that is downtown Denver on St. Paddy’s Day. 🍀🤗

    Party's all around
    Bubbles

    Yes, there will likely be more to do to Hefe in the coming years. That’s OK. He inspires smiles every time I drive him and providing joy to people is a beautiful experience.

    Kudos to all seven Colorado shops that made Hefe possible. I won’t say he’s worth every penny, but he’s pretty darn close! 😍

    0 0

    Blog post edited by Christian Schneider

    Getting Started

    With this post I am beginning a series of posts about Apache Karaf, an OSGi container based on Equinox or Felix. The main difference to these frameworks is that it brings excellent management features with it.

    Outstanding features of Karaf:

    • Extensible Console with Bash like completion features
    • ssh console
    • deployment of bundles and features from maven repositories
    • easy creation of new instances from command line

    Altogether these features make developing server-based OSGi applications almost as easy as regular Java applications. Deployment and management are on a level far better than any application server I have seen so far. All this comes with a small footprint, both for Karaf itself and for the resulting applications. In my opinion this allows a lightweight development style like Java EE 6 combined with the flexibility of Spring applications.

    Installation and first startup

    • Download Karaf 4.0.7 from the Karaf web site.
    • Extract and start with bin/karaf

    You should see the welcome screen:

            __ __                  ____
           / //_/____ __________ _/ __/
          / ,<  / __ `/ ___/ __ `/ /_
         / /| |/ /_/ / /  / /_/ / __/
        /_/ |_|\__,_/_/   \__,_/_/
      Apache Karaf (4.0.7)
    Hit '<tab>' for a list of available commands
    and '[cmd] --help' for help on a specific command.
    Hit '<ctrl-d>' or 'osgi:shutdown' to shutdown Karaf.

    Some handy commands

    Command Description
    la Shows all installed bundles
    list Shows user bundles
    service:list Shows the active OSGi services. This list is quite long. Here it is quite handy that you can use unix pipes like "ls | grep admin"
    package:exports Shows exported packages and the bundles providing them. This helps to find out where a package may come from.
    feature:list Shows which features are installed and can be installed.
    feature:install webconsole

    Installs a feature (a list of bundles and other features). Using the above command we install the Karaf webconsole.

    It can be reached at http://localhost:8181/system/console . Log in with karaf/karaf and take some time to see what it has to offer.

    diag Shows diagnostic information for bundles that could not be started
    log:tail Shows the log. Use Ctrl-c to go back to the console
    Ctrl-d Exits the console. If this is the main console, Karaf will also be stopped.

    OSGi containers preserve state after restarts

    Please note that Karaf, like all OSGi containers, maintains its last state of installed and started bundles. So if something no longer works, a restart will not necessarily help. To really start fresh again, stop Karaf and delete the data directory, or start with bin/karaf clean.
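    As a minimal sketch of what the paragraph above describes, resetting an instance to a pristine state looks like this:

        # stop Karaf first, then remove the persisted bundle state
        rm -rf data
        # or simply start with the clean option, which does the same
        bin/karaf clean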

    Check the logs

    Karaf is very silent. To not miss error messages, always keep a tail -f data/karaf.log open!

    Tasklist - A small osgi application

    Without any useful application Karaf is a nice but useless container. So let's create our first application. The good news is that creating an OSGi application is quite easy, and Maven can help a lot. The difference to a normal Maven project is quite small. To write the application I recommend using Eclipse 4 with the m2eclipse plugin, which is installed by default in current versions.

    Get the source code from the Karaf-Tutorial repo at github.

    git clone

    or download the sample project archive and extract it to a directory.

    Import into Eclipse

    • Start Eclipse Neon or newer
    • In the Eclipse package explorer: Import -> Existing Maven Projects -> browse to the tasklist sub directory of the extracted directory
    • Eclipse will show all maven projects it finds
    • Click through to import all projects with defaults

    Eclipse will now import the projects and wire all dependencies using m2eclipse.

    The tasklist example consists of these projects

    Module Description
    tasklist-model Service interface and Task class
    tasklist-persistence Simple persistence implementation that offers a TaskService
    tasklist-ui Servlet that displays the tasklist using a TaskService
    tasklist-features Features descriptor for the application that makes installing in Karaf very easy

    Parent pom and general project setup

    The pom.xml is of packaging bundle, and the maven-bundle-plugin creates the jar with an OSGi manifest. By default the plugin imports all packages that are imported in Java files or referenced in the blueprint context.

    It also exports all packages that do not contain the string impl or internal. In our case we want the model package to be exported but not the persistence.impl package. As the naming convention is followed, we need no additional configuration.
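    As a rough sketch (plugin version omitted; this is illustrative, not copied from the project), the relevant part of such a pom looks like this:

        <packaging>bundle</packaging>
        ...
        <build>
            <plugins>
                <!-- creates the OSGi manifest during the jar build -->
                <plugin>
                    <groupId>org.apache.felix</groupId>
                    <artifactId>maven-bundle-plugin</artifactId>
                    <extensions>true</extensions>
                </plugin>
            </plugins>
        </build>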


    Tasklist-model

    This project contains the domain model, in our case the Task class and the TaskService interface. The model is used by both the persistence implementation and the user interface. Any user of the TaskService will only need the model, so it is never directly bound to our current implementation.
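    As a minimal sketch (the method names are illustrative, not copied from the repository), such a service interface could look like this:

        import java.util.Collection;

        public interface TaskService {
            Task getTask(Integer id);      // look up a single task by id
            void addTask(Task task);       // store a new task
            Collection<Task> getTasks();   // list all known tasks
        }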


    Tasklist-persistence

    The very simple persistence implementation TaskServiceImpl manages tasks in a simple HashMap. The class uses the @Singleton annotation to expose the class as a blueprint bean.

    The annotation @Service exposes the bean as an OSGi service, and the properties attribute allows adding service properties. In our case the property service.exported.interfaces we set can be used by CXF-DOSGi, which we present in a later tutorial. For this tutorial the properties could also be removed.

    @Singleton
    @Service(classes = TaskService.class, properties = {
        @ServiceProperty(name = "service.exported.interfaces", values = "*")
    })
    public class TaskServiceImpl implements TaskService {

    The blueprint-maven-plugin will process the class above and automatically create the suitable blueprint xml. So this saves us from writing blueprint xml by hand.

    The automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    	<bean id="taskService" class="net.lr.tasklist.persistence.impl.TaskServiceImpl" />
    	<service ref="taskService" interface="net.lr.tasklist.model.TaskService" />
    </blueprint>


    Tasklist-ui

    The ui project contains a small servlet TaskListServlet to display the task list and individual tasks. To work with the tasks the servlet needs the TaskService. We inject the TaskService by using the annotation @Inject, which can inject any bean by type, together with the annotation @OsgiService, which creates a blueprint reference to an OSGi service of the given type.

    The whole class is exposed as an OSGi service with the interface javax.servlet.Servlet and a service property that maps it to the relative URL /tasklist. This triggers the whiteboard extender of Pax Web, which picks up the service and publishes it as a servlet at that URL.

    Snippet of the relevant code:

    @Service(classes = Servlet.class, properties = {
        @ServiceProperty(name = "osgi.http.whiteboard.servlet.pattern", values = "/tasklist")
    })
    public class TaskListServlet extends HttpServlet {
        @Inject @OsgiService
        TaskService taskService;
    }
    The automatically created blueprint xml can be found in target/generated-resources:

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    	<reference id="taskService" availability="mandatory" interface="net.lr.tasklist.model.TaskService" />
    	<bean id="taskServlet" class="net.lr.tasklist.ui.TaskListServlet">
    		<property name="taskService" ref="taskService" />
    	</bean>
    	<service ref="taskServlet" interface="javax.servlet.http.HttpServlet">
    		<service-properties>
    			<entry key="alias" value="/tasklist" />
    		</service-properties>
    	</service>
    </blueprint>



    Tasklist-features

    The last project only installs a feature descriptor to the Maven repository so we can install it easily in Karaf. The descriptor defines the example features and the bundles to be installed from the Maven repository.

    <feature name="example-tasklist-persistence" version="${pom.version}">
    	<bundle>mvn:net.lr.tasklist/tasklist-model/${pom.version}</bundle>
    	<bundle>mvn:net.lr.tasklist/tasklist-persistence/${pom.version}</bundle>
    </feature>
    <feature name="example-tasklist-ui" version="${pom.version}">
    	<feature>example-tasklist-persistence</feature>
    	<bundle>mvn:net.lr.tasklist/tasklist-ui/${pom.version}</bundle>
    </feature>

    A feature can consist of other features that should also be installed and bundles to be installed. The bundles typically use mvn URLs. This means they are loaded from the configured Maven repositories or your local Maven repository in ~/.m2/repository.
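    As an illustration, a mvn URL follows the pattern mvn:groupId/artifactId/version, so a bundle line could look like this (the coordinates shown are those of the example project and may differ):

        mvn:net.lr.tasklist/tasklist-persistence/1.0.0-SNAPSHOT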

    Installing the Application in Karaf

    First add the features descriptor to Karaf so its features become available, then install and start the tasklist features (adjust the Maven coordinates to those of the features project):

    feature:repo-add mvn:net.lr.tasklist/tasklist-features/1.0.0-SNAPSHOT/xml
    feature:install example-tasklist-persistence example-tasklist-ui

    After these commands the tasklist application should run.


    Check with list that all bundles of tasklist are active. If not, try to start them and check the log. The http:list command should show the TaskListServlet:

    ID | Servlet         | Servlet-Name   | State       | Alias     | Url
    56 | TaskListServlet | ServletModel-2 | Deployed    | /tasklist | [/tasklist/*]

    By default the example will be available at http://localhost:8181/tasklist .

    You can change the port by creating a text file etc/org.ops4j.pax.web.cfg with the content "org.osgi.service.http.port=8080". This tells the HttpService to use port 8080, and the tasklist application will then be available at http://localhost:8080/tasklist
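    For reference, the whole configuration file then consists of a single line:

        # etc/org.ops4j.pax.web.cfg
        org.osgi.service.http.port=8080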


    Summary

    In this tutorial we have installed Karaf and learned some commands. Then we created a small OSGi application that shows servlets, OSGi services, blueprint and the whiteboard pattern.

    In the next tutorial we take a look at using Apache Camel and Apache CXF on OSGi.

    Back to Karaf Tutorials

    0 0