Channel: Planet Apache

Bryan Pendleton: All Over the Place: a very short review


Is it possible that I am the first person to tell you about Geraldine DeRuiter's new book: All Over the Place: Adventures in Travel, True Love, and Petty Theft?

If I am, then yay!

For this is a simply wonderful book, and I hope everybody finds out about it.

As anyone who has read the book (or has read her blog) knows, DeRuiter can be just screamingly funny. More than once I found myself causing a distraction on the bus, as my fellow riders wondered what was causing me to laugh out loud. Many books are described as "laugh out loud funny," but DeRuiter's book truly is.

Much better, though, are the parts of her book that aren't the funny parts, for it is here that her writing skills truly shine.

DeRuiter is sensitive, perceptive, honest, and caring. Best of all, however, is that she is able to write about affairs of the heart in a way that is warm and generous, never cloying or cringe-worthy.

So yes: you'll laugh, you'll cry, you'll look at the world with fresh new eyes. What more could you want from a book?

All Over the Place is all of that.


Justin Mason: Links for 2017-06-20


Justin Mason: Links for 2017-06-21


Colm O hEigeartaigh: SSO support for Apache Syncope REST services

Apache Syncope has recently added SSO support for its REST services in the 2.0.3 release. Previously, access to the REST services of Syncope was via HTTP Basic Authentication. From the 2.0.3 release, SSO support is available using JSON Web Tokens (JWT). In this post, we will look at how this works and how it can be configured.

1) Obtaining an SSO token from Apache Syncope

As stated above, in the past it was necessary to supply HTTP Basic Authentication credentials when invoking on the REST API. Let's look at an example using curl. Assume we have a running Apache Syncope instance with a user "alice" with password "ecila". We can make a GET request to the user self service via:
  • curl -u alice:ecila http://localhost:8080/syncope/rest/users/self
It may be inconvenient to supply user credentials on each request, and the authentication process might not scale very well if each request means re-authenticating the password against a backend resource. From Apache Syncope 2.0.3, we can instead get an SSO token by sending a POST request to "accessTokens/login" as follows:
  • curl -I -u alice:ecila -X POST http://localhost:8080/syncope/rest/accessTokens/login
The response contains two headers:
  • X-Syncope-Token: A JWT token signed according to the JSON Web Signature (JWS) spec.
  • X-Syncope-Token-Expire: The expiry date of the token
The token in question is signed using the (symmetric) "HS512" algorithm. It contains the subject "alice" and the issuer of the token ("ApacheSyncope"), as well as a random token identifier, and timestamps that indicate when the token was issued, when it expires, and when it should not be accepted before.
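
If you want to script this, a minimal sketch using Python's requests library and PyJWT is shown below. It is not an official Syncope client; it simply assumes the local URL and the alice/ecila credentials from the curl examples above:

import requests
import jwt  # PyJWT

BASE = "http://localhost:8080/syncope/rest"

# POST to accessTokens/login with Basic Authentication credentials
resp = requests.post(BASE + "/accessTokens/login", auth=("alice", "ecila"))
resp.raise_for_status()

token = resp.headers["X-Syncope-Token"]
print("Expires:", resp.headers["X-Syncope-Token-Expire"])

# Peek at the claims without verifying the signature (illustration only);
# expect the standard claim names (sub, iss, jti, iat, exp, nbf).
claims = jwt.decode(token, options={"verify_signature": False})
print(claims)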

The signing key and the issuer name can be changed by editing 'security.properties' and specifying new values for 'jwsKey' and 'jwtIssuer'. Please note that it is critical to change the signing key from the default value! It is also possible to change the signature algorithm from the next 2.0.4 release via a custom 'securityContext.xml' (see here). The default lifetime of the token (120 minutes) can be changed via the "jwt.lifetime.minutes" configuration property for the domain.

2) Using the SSO token to invoke on a REST service

Now that we have an SSO token, we can use it to invoke on a REST service instead of specifying our username and password as before, e.g.:
  • curl -H "X-Syncope-Token: eyJ0e..." http://localhost:8080/syncope/rest/users/self
The signature is first checked on the token, then the issuer is verified so that it matches what is configured, and then the expiry and not-before dates are checked. If the identifier matches that of a saved access token then authentication is successful.
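
Sketched with PyJWT (not Syncope's own code), those server-side checks amount to roughly the following, assuming the symmetric 'jwsKey' from security.properties is at hand; matching the token identifier against a saved access token is Syncope's own final step:

import jwt  # PyJWT

def verify_syncope_token(token, jws_key):
    # Verifies the HS512 signature, the "ApacheSyncope" issuer and the
    # exp/nbf timestamps; raises a jwt.InvalidTokenError subclass on failure.
    return jwt.decode(token, jws_key, algorithms=["HS512"], issuer="ApacheSyncope")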

Finally, SSO tokens can be seen in the admin console under "Dashboard/Access Token", where they can be manually revoked by the admin user.


Justin Mason: Links for 2017-06-22


Chiradeep Vittal: Design patterns in orchestrators: transfer of desired state (part 3/N)


Most datacenter automation tools operate on the basis of desired state. Desired state describes what the end state should be, but not how to get there. To simplify a great deal, if the thing being automated is the speed of a car, the desired state may be “60mph”. How to get there (braking, accelerator, gear changes, turbo) isn’t specified. Something (an “agent”) promises to maintain that desired speed.


The desired state and changes to the desired state are sent from the orchestrator to various agents in a datacenter. For example, the desired state may be “two apache containers running on host X”. An agent on host X will ensure that the two containers are running. If one or more containers die, then the agent on host X will start enough containers to bring the count up to two. When the orchestrator changes the desired state to “3 apache containers running on host X”, then the agent on host X will create another container to match the desired state.
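
As a toy illustration of that reconcile idea (not any particular agent's implementation), the loop below keeps an in-memory stand-in for the container runtime in line with a desired count; list_running, start_container and stop_container are made-up helpers:

running = {"apache": ["apache-1"]}        # pretend actual state: one container up

def list_running(image):
    return running.get(image, [])

def start_container(image):
    running.setdefault(image, []).append(f"{image}-{len(list_running(image)) + 1}")

def stop_container(image):
    running[image].pop()

desired = {"apache": 2}                   # "two apache containers running on host X"

def reconcile(desired_counts):
    # Converge the actual state toward the desired state, whatever the drift.
    for image, want in desired_counts.items():
        have = len(list_running(image))
        for _ in range(want - have):      # too few: start more
            start_container(image)
        for _ in range(have - want):      # too many: stop some
            stop_container(image)

reconcile(desired)
print(running)                            # {'apache': ['apache-1', 'apache-2']}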

Transfer of desired state is another way to achieve idempotence (a problem described here).

We can see that there are two sources of changes that the agent has to react to:

  1. changes to desired state sent from the orchestrator and
  2. drift in the actual state due to independent / random events.

Let’s examine #1 in greater detail. There are a few ways to communicate the change in desired state:

  1. Send the new desired state to the agent (a “command” pattern). This approach works most of the time, except when the size of the state is very large. For instance, consider an agent responsible for storing a million objects. Deleting a single object would involve sending the whole desired state (999999 items). Another problem is that the command may not reach the agent (“the network is not reliable”). Finally, the agent may not be able to keep up with the rate of change of the desired state and start to drop some commands. To fix this issue, the system designer might be tempted to run more instances of the agent; however, this usually leads to race conditions and out-of-order execution problems.
  2. Send just the delta from the previous desired state. This is fraught with problems. This assumes that the controller knows for sure that the previous desired state was successfully communicated to the agent, and that the agent has successfully implemented the previous desired state. For example, if the first desired state was “2 running apache containers” and the delta that was sent was “+1 apache container”, then the final actual state may or may not be “3 running apache containers”. Again, network reliability is a problem here. The rate of change is an even bigger potential problem here: if the agent is unable to keep up with the rate of change, it may drop intermediate delta requests. The final actual state of the system may be quite different from the desired state, but the agent may not realize it! Idempotence in the delta commands helps in this case.
  3. Send just an indication of change (“interrupt”). The agent has to perform the additional step of fetching the desired state from the controller. The agent can compute the delta and change the actual state to match the delta. This has the advantage that the agent is able to combine the effects of multiple changes (“interrupt debounce”). By coalescing the interrupts, the agent is able to limit the rate of change. Of course the network could cause some of these interrupts to get “lost” as well. Lost interrupts can cause the actual state to diverge from the desired state for long periods of time. Finally, if the desired state is very large, the agent and the orchestrator have to coordinate to efficiently determine the change to the desired state.
  4. The agent could poll the controller for the desired state. There is no problem of lost interrupts; the next polling cycle will always fetch the latest desired state. The polling rate is critical here: if it is too fast, it risks overwhelming the orchestrator even when there are no changes to the desired state; if too slow, it will not converge the actual state to the desired state quickly enough.

To summarize the potential issues:

  1. The network is not reliable. Commands or interrupts can be lost or agents can restart / disconnect: there has to be some way for the agent to recover the desired state
  2. The desired state can be prohibitively large. There needs to be some way to efficiently but accurately communicate the delta to the agent.
  3. The rate of change of the desired state can strain the orchestrator, the network and the agent. To preserve the stability of the system, the agent and orchestrator need to coordinate to limit the rate of change, the polling rate and to execute the changes in the proper linear order.
  4. Only the latest desired state matters. There has to be some way for the agent to discard all the intermediate (“stale”) commands and interrupts that it has not been able to process.
  5. Delta computation (the difference between two consecutive sets of desired state) can sometimes be more efficiently performed at the orchestrator, in which case the agent is sent the delta. Loss of the delta message or reordering of execution can lead to irrecoverable problems.

A persistent message queue can solve some of these problems. The orchestrator sends its commands or interrupts to the queue and the agent reads from the queue. The message queue buffers commands or interrupts while the agent is busy processing a desired state request.  The agent and the orchestrator are nicely decoupled: they don’t need to discover each other’s location (IP/FQDN). Message framing and transport are taken care of (no more choosing between Thrift or text or HTTP or gRPC etc).


There are tradeoffs however:

  1. With the command pattern, if the desired state is large, then the message queue could reach its storage limits quickly. If the agent ends up discarding most commands, this can be quite inefficient.
  2. With the interrupt pattern, a message queue is not adding much value since the agent will talk directly to the orchestrator anyway.
  3. It is not trivial to operate / manage / monitor a persistent queue. Messages may need to be aggressively expired / purged, and the promise of persistence may not actually be realized. Depending on the scale of the automation, this overhead may not be worth the effort.
  4. With an “at most once” message queue, it could still lose messages. With  “at least once” semantics, the message queue could deliver multiple copies of the same message: the agent has to be able to determine if it is a duplicate. The orchestrator and agent still have to solve some of the end-to-end reliability problems.
  5. Delta computation is not solved by the message queue.

OpenStack (using RabbitMQ) and CloudFoundry (using NATS) have adopted message queues to communicate desired state from the orchestrator to the agent.  Apache CloudStack doesn’t have any explicit message queues, although if one digs deeply, there are command-based message queues simulated in the database and in memory.

Others solve the problem with a combination of interrupts and polling – interrupt to execute the change quickly, poll to recover from lost interrupts.

Kubernetes is one such framework. There are no message queues, and it uses an explicit interrupt-driven mechanism to communicate desired state from the orchestrator (the “API Server”) to its agents (called “controllers”).


(Image courtesy: https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca)

Developers can use (but are not forced to use) a controller framework to write new controllers. An instance of a controller embeds an “Informer” whose responsibility is to watch for changes in the desired state and execute a controller function when there is a change. The Informer takes care of caching the desired state locally and computing the delta state when there are changes. The Informer leverages the “watch” mechanism in the Kubernetes API Server (an interrupt-like system that delivers a network notification when there is a change to a stored key or value). The deltas to the desired state are queued internally in the Informer’s memory. The Informer ensures the changes are executed in the correct order.

  • Desired states are versioned, so it is easier to decide to compute a delta, or to discard an interrupt.
  • The Informer can be configured to do a periodic full resync from the orchestrator (“API Server”) – this should take care of the problem of lost interrupts.
  • Apparently, there is no problem of the desired state being too large, so Kubernetes does not explicitly handle this issue.
  • It is not clear if the Informer attempts to rate-limit itself when there are excessive watches being triggered.
  • It is also not clear if at some point the Informer “fast-forwards” through its queue of changes.
  • The watches in the API Server use Etcd watches in turn. The watch server in the API server only maintains a limited set of watches received from Etcd and discards the oldest ones.
  • Etcd itself is a distributed data store that is more complex to operate than say, an SQL database. It appears that the API server hides the Etcd server from the rest of the system, and therefore Etcd could be replaced with some other store.
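
To make the watch-plus-resync mechanism concrete, here is a rough sketch of the loop an Informer-style agent runs. fetch_desired, watch_events and apply_delta are hypothetical callables supplied by the caller; this is not the client-go Informer API:

import queue
import time

def run_agent(fetch_desired, watch_events, apply_delta, resync_seconds=300):
    cached = fetch_desired()              # local cache of the desired state
    apply_delta(old=None, new=cached)
    interrupts = queue.Queue()
    watch_events(interrupts)              # expected to push change notifications
                                          # into the queue, e.g. from a thread
    last_resync = time.time()
    while True:
        try:
            interrupts.get(timeout=1)     # an interrupt: something changed
            while not interrupts.empty(): # coalesce a burst of interrupts
                interrupts.get_nowait()   # ("interrupt debounce")
            new = fetch_desired()
        except queue.Empty:
            if time.time() - last_resync < resync_seconds:
                continue
            new = fetch_desired()         # periodic full resync recovers from
            last_resync = time.time()     # lost interrupts
        apply_delta(old=cached, new=new)  # act only on the computed delta
        cached = new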

I wrote a Network Policy Controller for Kubernetes using this framework and it was the easiest integration I’ve written.

It is clear that the Kubernetes creators put some thought into the architecture, based on their experiences at Google. The Kubernetes design should inspire other orchestrator-writers, or perhaps, should be re-used for other datacenter automation purposes. A few issues to consider:

  • The agents (“controllers”) need direct network reachability to the API Server. This may not be possible in all scenarios, requiring another level of indirection.
  • The API server is not strictly an orchestrator; it is better described as a choreographer. I hope to describe this difference in a later blog post, but note that the API server never explicitly carries out a step-by-step flow of operations.

Justin Mason: Links for 2017-06-24

Colm O hEigeartaigh: Securing Apache Solr - part I

This is the first post in a series of articles on securing Apache Solr. In this post we will look at deploying an example SolrCloud instance and securing access to it via basic authentication.

1) Install and deploy a SolrCloud example

Download and extract Apache Solr (6.6.0 was used for the purpose of this tutorial). Now start SolrCloud via:
  • bin/solr -e cloud
Accept all of the default options. This creates a cluster of two nodes, with a collection "gettingstarted" split into two shards and two replicas per shard. A web interface is available after startup at: http://localhost:8983/solr/.

Once the cluster is up and running we can post some data to the collection we have created via the REST interface:
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book1", "title_t" : "The Merchant of Venice", "author_s" : "William Shakespeare"}]'
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book2", "title_t" : "Macbeth", "author_s" : "William Shakespeare"}]'
  • curl http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book3", "title_t" : "Death of a Salesman", "author_s" : "Arthur Miller"}]'
We can query the REST interface to, for example, return all entries by William Shakespeare as follows:
  • curl http://localhost:8983/solr/gettingstarted/query?q=author_s:William+Shakespeare
2) Authenticating users to our SolrCloud instance

Now that our SolrCloud instance is up and running, let's look at how we can secure access to it, by using HTTP Basic Authentication to authenticate our REST requests. Download the following security configuration which enables Basic Authentication in Solr:
Two users are defined - "alice" and "bob" - both with password "SolrRocks". Now upload this configuration to the Apache Zookeeper instance that is running with Solr:
  • server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
Now try to run the search query above again using Curl. A 401 error will be returned. Once we specify the correct credentials then the request will work as expected, e.g.:
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
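
The same checks can be scripted; here is a small sketch with Python's requests library, assuming the "gettingstarted" collection and the alice/SolrRocks credentials used above:

import requests

SOLR = "http://localhost:8983/solr/gettingstarted"

# Without credentials the request is now rejected.
print(requests.get(SOLR + "/query", params={"q": "*:*"}).status_code)   # 401

# With Basic Authentication the query succeeds.
r = requests.get(SOLR + "/query", params={"q": "author_s:Arthur Miller"},
                 auth=("alice", "SolrRocks"))
print(r.json()["response"]["numFound"])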

Justin Mason: Links for 2017-06-26


Bryan Pendleton: Those were the days ...


A dear colleague of mine tracked down this wonderful picture of me with one of my favorite engineering teams:

How wonderful it was to have such brilliant co-workers to learn from!

If, in some magical way, 56-year-old Bryan could go back 23 years and talk to 33-year-old Bryan, what would I say?

Maybe something like:

Pay attention, keep listening, and work hard: you've still got a LOT left to learn.

Of course, that's just as true now.

Hmmm, maybe I just got some words of wisdom from 79-year-old Bryan, far off in the future?

As for the picture, as I recall, we were looking for a theme for a team picture, and Nat and Brian both happened to be wearing their leather coats that day, and so somebody (Rich? Ken?) suggested we all get our coats and sunglasses and "look tough". So we did...

Colm O hEigeartaigh: Securing Apache Solr - part II

This is the second post in a series of articles on securing Apache Solr. The first post looked at setting up a sample SolrCloud instance and securing access to it via Basic Authentication. In this post we will temporarily deviate from the concept of "securing Apache Solr", and instead look at how the Apache Ranger admin service can be configured to store audit information in Apache Solr.

1) Download and extract the Apache Ranger admin service

The first step is to download the source code, as well as the signature file and associated message digests (all available on the download page). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting admin archive to a location where you wish to install the UI:
  • tar zxvf apache-ranger-incubating-1.0.0.tar.gz
  • cd apache-ranger-incubating-1.0.0
  • mvn clean package assembly:assembly 
  • tar zxvf target/ranger-1.0.0-admin.tar.gz
  • mv ranger-1.0.0-admin ${rangerhome}
2) Install MySQL

The Apache Ranger Admin UI requires a database to keep track of users/groups as well as policies for various big data projects that you are securing via Ranger. For the purposes of this tutorial, we will use MySQL. Install MySQL in $SQL_HOME and start MySQL via:
  • sudo $SQL_HOME/bin/mysqld_safe --user=mysql
Now you need to log on as the root user and create two users for Ranger. We need a root user with admin privileges (let's call this user "admin") and a user for the Ranger Schema (we'll call this user "ranger"):
  • CREATE USER 'admin'@'localhost' IDENTIFIED BY 'password';
  • GRANT ALL PRIVILEGES ON * . * TO 'admin'@'localhost' WITH GRANT OPTION;
  • CREATE USER 'ranger'@'localhost' IDENTIFIED BY 'password';
  • FLUSH PRIVILEGES;
Finally,  download the JDBC driver jar for MySQL and put it in ${rangerhome}.

3) Configure Apache Solr to support auditing from Ranger

Before installing the Apache Ranger admin service we will need to configure Apache Solr. The Apache Ranger admin service ships with a script to make this easier to configure. Edit 'contrib/solr_for_audit_setup/install.properties' with the following properties:
  • SOLR_USER/SOLR_GROUP - the user/group you are running solr as
  • SOLR_INSTALL_FOLDER - Where you have extracted Solr to as per the first tutorial.
  • SOLR_RANGER_HOME - Where to install the Ranger configuration for Solr auditing.
  • SOLR_RANGER_PORT - The port to be used (8983 as per the first tutorial).
  • SOLR_DEPLOYMENT - solrcloud
  • SOLR_HOST_URL - http://localhost:8983
  • SOLR_ZK - localhost:2181
Make sure that the user running Solr has permission to write to the value configured for "SOLR_LOG_FOLDER" (/var/log/solr/ranger_audits). Now in 'contrib/solr_for_audit_setup' run 'sudo -E ./setup.sh'. The Solr configuration is now copied to $SOLR_RANGER_HOME.

4) Start Apache Zookeeper and SolrCloud

Before starting Apache Solr we will need to start Apache Zookeeper. Download Apache Zookeeper and start it on port 2181 (this step was not required in the previous tutorial, as we were launching SolrCloud with an embedded Zookeeper instance) via:
  • bin/zkServer.sh start
As per the first post, we want to secure access to SolrCloud via Basic Authentication (note that this is only recently fixed in Apache Ranger). So follow the steps in that post to upload the security.json to Zookeeper via:
  • server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /security.json security.json
Start Solr as follows in the '${SOLR_RANGER_HOME}/ranger_audit_server/scripts' directory:
  • ./add_ranger_audits_conf_to_zk.sh 
  • ./start_solr.sh
Edit 'create_ranger_audits_collection.sh' and change 'curl --negotiate -u :' to 'curl -u "alice:SolrRocks"'. Save it and then run:
  • ./create_ranger_audits_collection.sh
5) Install the Apache Ranger Admin UI

Edit ${rangerhome}/install.properties and make the following changes:
  • Change SQL_CONNECTOR_JAR to point to the MySQL JDBC driver jar that you downloaded above.
  • Set (db_root_user/db_root_password) to (admin/password)
  • Set (db_user/db_password) to (ranger/password)
  • audit_solr_urls: http://localhost:8983/solr/ranger_audits
  • audit_solr_user: alice
  • audit_solr_password: SolrRocks
  • audit_solr_zookeepers: localhost:2181
Now you can run the setup script via "sudo -E ./setup.sh". When this is done then start the Apache Ranger admin service via: "sudo ranger-admin start".

6) Test that auditing is working correctly in the Ranger Admin service

Open a browser and navigate to "http://localhost:6080". Try to log on first using some made up credentials. Then log in using "admin/admin". Click on the "Audit" tab and then select "Login Sessions". You should see the incorrect and the correct login attempts, meaning that Ranger is successfully storing and retrieving audit information in Solr.
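
As a quick sanity check, you can also query the "ranger_audits" collection directly to confirm that audit documents are arriving; a sketch with Python's requests library, using the Solr credentials configured above:

import requests

r = requests.get("http://localhost:8983/solr/ranger_audits/query",
                 params={"q": "*:*", "rows": 5},
                 auth=("alice", "SolrRocks"))
print(r.json()["response"]["numFound"])   # grows as Ranger writes audit records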


Ortwin Glück: [Code] fix Stack Clash on Gentoo

The Stack Clash class of bugs can be easily prevented on Gentoo.

1. Add -fstack-check to your CFLAGS. It instructs the compiler to touch every page when extending the stack by more than one page. So the kernel will trap in the guard page. This even makes the larger stack gap in recent kernels unnecessary (if you don't run other binaries).

/etc/portage/make.conf:
CFLAGS="-march=native -O2 -pipe -fstack-check"

2. Recompile important libraries (like openssl) and programs (setuid root binaries in shadow and util-linux) or simply everything: emerge -ae world

As always, keep your system up to date regularly: emerge -uavD world

Matt Raible: Devoxx Poland: A Huge Conference in a Beautiful City


It's been a little over six years since I first ventured to Kraków, Poland. I have fond memories of that trip, mostly because Trish was with me and we explored lots of sites. Last month, I visited Kraków for GeeCON, but only stayed for one night.

Last week, I had the pleasure of visiting a third time for my first Devoxx Poland. I was excited to travel internationally again with my favorite travel shirt on. This caused a funny conversation with TSA just before my departure.

Heading to the airport in my favorite travel shirt

I arrived in Krakow on a beautiful day and took an Ubër to my hotel next to the venue. I took a stroll along the Vistula River to enjoy the sunshine.

Along the Vistula River: a gorgeous day for a stroll in Krakow

I attended the conference happy hour that evening, then journeyed to a local restaurant for some delicious food and fun conversations. There's a chance one of those conversations inspires a speaking tour in South Africa next year.

On Thursday, I attended a couple sessions on application and microservices security, and delivered my talk on PWAs with Ionic, Angular, and Spring Boot. You can check out my presentation below or on Speaker Deck.

I was amazed at the sheer size of Devoxx Poland! Not only was the venue massive, but its 2500 attendees filled it up quickly. Thursday evening was the speaker's dinner, on a boat no less! It was a great location, with lots of familiar faces.

Friday, I spoke about What's New in JHipsterLand. My presentation can be viewed below or on Speaker Deck. You can download the PDF from Speaker Deck for clickable links.

In other news, I've been busy writing blog posts for the @OktaDev blog.

I also wrote an article for scotch.io titled "The Ultimate Guide to Progressive Web Applications."

For the next couple of weeks, I'll be on vacation in Montana. Then it's time for ÜberConf, vJUG (with Josh Long!), and Oktane17.

If I don't see you on the rivers in Montana or at an upcoming conference, I hope you have a great summer!

Justin Mason: Links for 2017-06-27

  • RIPE Atlas Probes

    Interesting! We discussed similar ideas in $prevjob, good to see one hitting production globally.

    RIPE Atlas probes form the backbone of the RIPE Atlas infrastructure. Volunteers all over the world host these small hardware devices that actively measure Internet connectivity through ping, traceroute, DNS, SSL/TLS, NTP and HTTP measurements. This data is collected and aggregated by the RIPE NCC, which makes the data publicly available. Network operators, engineers, researchers and even home users have used this data for a wide range of purposes, from investigating network outages to DNS anycasting to testing IPv6 connectivity. Anyone can apply to host a RIPE Atlas probe. If your application is successful (based on your location), we will ship you a probe free of charge. Hosts simply need to plug their probe into their home (or other) network. Probes are USB-powered and are connected to an Ethernet port on the host’s router or switch. They then automatically and continuously perform active measurements about the Internet’s connectivity, and this data is sent to the RIPE NCC, where it is aggregated and made publicly available. We also use this data to create several Internet maps and data visualisations. [….] The hardware of the first and second generation probes is a Lantronix XPort Pro module with custom powering and housing built around it. The third generation probe is a modified TP-Link wireless router (model TL-MR 3020) with a small USB thumb drive in it, but this probe does not support WiFi.
    (via irldexter)

    (tags: via:irldexter ripencc probing active-monitoring networking ping traceroute dns testing http ipv6 anycast hardware devices isps)

  • “BBC English” was invented by a small team in the 1920s & 30s

    Excellent twitter thread:

    Today we speak of “BBC English” as a standard form of the language, but this form had to be invented by a small team in the 1920s & 30s. 1/ It turned out even within the upper-class London accent that became the basis for BBC English, many words had competing pronunciations. 2/ Thus in 1926, the BBC’s first managing director John Reith established an “Advisory Committee on Spoken English” to sort things out. 3/ The committee was chaired by Irish playwright George Bernard Shaw, and also included American essayist Logan Pearsall Smith, 4/ novelist Rose Macaulay, lexicographer (and 4th OED editor) C.T. Onions, art critic Kenneth Clark, journalist Alistair Cooke, 5/ ghost story writer Lady Cynthia Asquith, and evolutionary biologist and eugenicist Julian Huxley. 6/ The 20-person committee held fierce debates, and pronunciations now considered standard were often decided by just a few votes.

    (tags: bbc language english history rp received-pronunciation pronunciation john-reith)

Bryan Pendleton: Silicon Valley title sequence


Here's a very nice detailed breakdown of all the inside-the-Valley references, barbs, cultural references, and other easy-to-miss aspects of the title sequence of HBO's Silicon Valley TV series.


Bryan Pendleton: Aviatrix: a very short review


My daughter returned from a writer's convention having met Mary Bush Shipko, and having bought her book: AVIATRIX: First Woman Pilot for Hughes Airwest, as a gift for me.

I classify Aviatrix as a memoir, by which I mean it broadly follows this recipe:

  • The author tells various stories, of the author's choosing, and in his or her own words, about events that seemed particularly important in the author's life
  • The author explains a bit about why he or she chose these stories, and what sort of lessons you might learn from considering the author's experiences
  • The author hopes you will enjoy learning about "what it was like," which is a valuable goal, as things change so quickly nowadays.

Aviatrix certainly covers a lot of this ground.

The early years of aviation in America, starting soon after World War II ended, and developing rapidly through the 1950's and 1960's, were a very interesting time in the development of the country, and it's hard not to find this history compelling.

Shipko had a fascinating early adulthood, living in southern Florida, learning to be a pilot at a young age, flying all sorts of fascinating trips around Florida and the Caribbean. Some of these stories were delightful, such as making cargo runs to the Bahamas to pick up crates of cucumber and zucchini, or having to make a very careful landing at the airport on the Abacos Islands because there was a building on fire at the far end of the runway.

But the later part of the book, while still telling the story of Shipko's aviation history, takes a dramatic turn.

Once she mastered the cargo planes and short-hop delivery routes of south Florida, she moved on to become a jet pilot and was, as she notes, the First Woman Pilot for Hughes Airwest, a remarkable achievement.

Sadly, though, the toll it took on her was immense. Plagued by bitter co-workers and a severely polarized and caustic work environment, she endured nearly a decade of hostility and harassment until she eventually was forced out of her chosen career, simply because she was "a woman doing a man's job."

The second half of the book is not easy to read. Some of the anger she faced was searing, and it clearly still burns, 40 years later.

None of this will be much of a surprise to anyone who paid even the slightest bit of attention to events such as the Tailhook Scandal, and it won't be much of a spoiler to reveal that things still haven't much changed.

I admire Shipko for speaking up, for writing her book, for telling her story. Although it certainly doesn't qualify as "light summer reading," it was undeniably fascinating.

Justin Mason: Links for 2017-06-28

  • Mozilla Employee Denied Entry to the United States

    Ugh. Every non-USian tech worker’s nightmare. curl developer Daniel Stenberg:

    “I can’t think of a single valid reason why they would deny me travel, so what concerns me is that somehow someone did and then I’m worried that I’ll get trouble fixing that issue,” Stenberg said. “I’m a little worried since border crossings are fairly serious matters and getting trouble to visit the US in the future would be a serious blowback for me, both personally with friends and relatives there, and professionally with conferences and events there.”

    (tags: curl travel mozilla esta us-politics usa immigration flying)

Ortwin Glück: [Code] How to fix broken image preview in Gentoo KDE

Due to recent updates your folder preview of images in KDE Dolphin is probably broken. The culprit is the exiv2 library (which in itself is a major problem).

To fix that, rebuild exiv2 first: emerge -1av exiv2

Then check what uses it: revdep-rebuild -pL libexiv2

And recompile that: emerge -1av kde-apps/libkexiv2 kde-apps/kio-extras kfilemetadata gwenview

Colm O hEigeartaigh: Securing Apache Solr - part III

This is the third post in a series of articles on securing Apache Solr. The first post looked at setting up a sample SolrCloud instance and securing access to it via Basic Authentication. The second post looked at how the Apache Ranger admin service can be configured to store audit information in Apache Solr. In this post we will extend the example in the first article to include authorization, by showing how to create and enforce authorization policies using Apache Ranger.

1) Install the Apache Ranger Solr plugin

The first step is to install the Apache Ranger Solr plugin. Download Apache Ranger and verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-${version}-solr-plugin.tar.gz
  • mv ranger-${version}-solr-plugin ${ranger.solr.home}
Now go to ${ranger.solr.home} and edit "install.properties". You need to specify the following properties:
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "solr_service".
  • COMPONENT_INSTALL_DIR_NAME: The location of your Apache Solr server directory
Save "install.properties" and install the plugin as root via "sudo -E ./enable-solr-plugin.sh". Make sure that the user who is running Solr can read the "/etc/ranger/solr_service/policycache". Now follow the first tutorial to get an example SolrCloud instance up and running with a "gettingstarted" collection. We will not enable the authorization plugin just yet.

2) Create authorization policies for Solr using the Apache Ranger Admin service

Now follow the second tutorial to download and install the Apache Ranger admin service. To avoid conflicting with the Solr example we are securing, we will skip the section about auditing to Apache Solr (sections 3 and 4). In addition, in section 5 the "audit_store" property can be left empty, and the Solr audit properties can be omitted. Start the Apache Ranger admin service via: "sudo ranger-admin start", and open a browser at "http://localhost:6080", logging on with "admin/admin" credentials. Click on the "+" button for the Solr service and create a new service with the following properties:
  • Service Name: solr_service
  • Username: alice
  • Password: SolrRocks
  • Solr URL: http://localhost:8983/solr
Hit the "Test Connection" button and it should show that it has successfully connected to Solr. Click "Add" and then click on the "solr_service" link that is subsequently created. We will grant a policy that allows "alice" the ability to read the "gettingstarted" collection. If "alice" is not already created, go to "Settings/User+Groups" and create a new user there. Delete the default policy that is created in the "solr_service" and then click on "Add new policy" and create a new policy called "gettingstarted_policy". For "Solr Collection" enter "g" here and the "gettingstarted" collection should pop up. Add a new "allow condition" granting the user "alice" the "others" and "query" permissions.




3) Test authorization using the Apache Ranger plugin for Solr

Now we are ready to enable the Apache Ranger authorization plugin for Solr. Download the following security configuration which enables Basic Authentication in Solr as well as the Apache Ranger authorization plugin:
Now upload this configuration to the Apache Zookeeper instance that is running with Solr:
  • server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd putfile /security.json security.json
 Now let's try to query the "gettingstarted" collection as 'alice':
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
This should be successful. However, authorization will fail for the case of "bob":
  • curl -u bob:SolrRocks http://localhost:8983/solr/gettingstarted/query?q=author_s:Arthur+Miller
In addition, although "alice" can query the collection, she can't write to it, and the following query will return 403:
  • curl -u alice:SolrRocks http://localhost:8983/solr/gettingstarted/update -d '[ {"id" : "book4", "title_t" : "Hamlet", "author_s" : "William Shakespeare"}]'
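
The same behaviour can be exercised from a small script; a sketch with Python's requests library, assuming the users and the Ranger policy created above:

import requests

QUERY = "http://localhost:8983/solr/gettingstarted/query"
UPDATE = "http://localhost:8983/solr/gettingstarted/update"
params = {"q": "author_s:Arthur Miller"}

print(requests.get(QUERY, params=params, auth=("alice", "SolrRocks")).status_code)  # 200
print(requests.get(QUERY, params=params, auth=("bob", "SolrRocks")).status_code)    # authorization fails
doc = [{"id": "book4", "title_t": "Hamlet", "author_s": "William Shakespeare"}]
print(requests.post(UPDATE, json=doc, auth=("alice", "SolrRocks")).status_code)     # 403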

Justin Mason: Links for 2017-06-30

  • Don’t Settle For Eventual Consistency

    Quite an argument. Not sure I agree, but worth a bookmark anyway…

    With an AP system, you are giving up consistency, and not really gaining anything in terms of effective availability, the type of availability you really care about.  Some might think you can regain strong consistency in an AP system by using strict quorums (where the number of nodes written + number of nodes read > number of replicas).  Cassandra calls this “tunable consistency”.  However, Kleppmann has shown that even with strict quorums, inconsistencies can result [10]. So when choosing (algorithmic) availability over consistency, you are giving up consistency for not much in return, as well as gaining complexity in your clients when they have to deal with inconsistencies.

    (tags: cap-theorem databases storage cap consistency cp ap eventual-consistency)

  • Delivering Billions of Messages Exactly Once · Segment Blog

    holy crap, this is exactly the wrong way to build a massive-scale deduplication system — with a monster random-access “is this random UUID in the db” lookup

    (tags: deduping architecture horror segment messaging kafka)
