
Bryan Pendleton: Plato at the Googleplex: a very short review


I happened to dig down through the stack and found Rebecca Goldstein's Plato at the Googleplex: Why Philosophy Won't Go Away.

Not that I, personally, was all that worried that Philosophy was going to go away.

But this is, obviously, a book for people who are interested in Philosophy, of whom there are two sorts:

  1. People who pursue, or who have pursued, Philosophy as an academic discipline.
  2. People who have a casual interest in Philosophy, and who were assigned, say, parts of The Republic during high school, or who took "Great Western Philosophers" as an elective in college

Myself, I'm more in the latter category.

Anyway, Goldstein is attempting to write for both audiences, which is rather a challenge.

The way she handles this is to, more-or-less, alternate the chapters in her book between audience one and audience two.

For audience one, there are chapters dense with an assessment of current academic views on Philosophy in general, and on how Plato's thinking is currently received, in particular.

There are lots of footnotes in those chapters.

And passages like

In the Theaetetus, Plato moves (though somewhat jerkily) toward the definition of knowledge as "true belief with a logos," an account. This is a first approximation to a definition that philosophers would eventually give: knowledge is justified true belief. The same true proposition that is merely believed by one person can be genuinely known by another, and the difference lies in the reasons the believer has for believing. The reasons have to be good ones, providing justification for his belief, making it a rational belief. These are all evaluative notions. The definition of knowledge forces a further question: what counts as good reasons? All of these are questions that make up the field of epistemology, and they are questions Plato raised.

Which, if you're in audience one, is probably just what you were looking for!

In the other chapters, aimed more at audience two I guess, Goldstein tries a different approach, in which she imagines that Plato is somehow magically alive today, 2,500 years later, wandering around in his toga, carrying a laptop computer, and interacting with various people.

The title of the book comes from one of these chapters, in which Goldstein describes Plato's visit to the headquarters campus of Google (the "Googleplex"), where Plato is to give a speech for an audience of Google employees.

Other such chapters imagine Plato appearing on a cable talk show segment, Plato in a town hall forum at the 92nd Street YMCA in Manhattan, Plato assisting with the answers on the Ask Margo website, and Plato participating in an MRI brain-scanning experiment.

It's a clever idea, but terribly hard to pull off; Goldstein does better than I anticipated, and surely much better than I would have done myself.

But it's still pretty contrived.

I guess the bottom line is that it's an interesting book.

If you are interested in Plato, that is.


Justin Mason: Links for 2018-03-24

Claus Ibsen: 10 Years as Apache Camel committer

Yesterday, March 25th 2018, marked 10 years since I became an Apache Camel committer.


It all started with this first commit:

commit 5f0f55a4f14fe061e96eeca4cff60a1577cd5969
Author: Claus Ibsen
Date:   Tue Mar 25 20:07:10 2008 +0000

    Added unit test for mistyped URI

It was a fitting first commit: adding a unit test that checked a condition for invalid configuration in the camel-mina component.
Disclaimer: I wrote this blog entry so that I have a memory of this anniversary to look back on in the future. In doing so there is a bit of a summary, with numbers, of my contributions over the years. This post covers solely my work and contributions to the Apache Camel project and related work over the last 10 years. The Apache Camel community is larger than one individual, and this post is not an attempt to overshadow all the hard work done by the many other individuals in the greater Camel community.
I got started with Apache Camel about six months earlier, in the fall of 2007, when my team and I were looking at various open source integration solutions. I have been working as a full-time committer since January 2009, so the earlier work was done in my spare time and as part of my previous job.

On GitHub my contributions to the Apache Camel project are shown below:

So you can see that I have indeed been on the Apache Camel project full time, working on it year after year.

To date there have been 109 releases of Apache Camel, and I have been directly involved as a committer starting from the Camel 1.3.0 release; that means 106 out of 109 releases. My first patch was submitted via JIRA ticket CAMEL-244 in November 2007. As you can see from the ticket, I had attention to detail back then ;)

I have also been very active in the Camel community and helped people on various forums such as the Apache Camel user mailing list, where I have sent countless emails. In fact Nabble has a record of more than 15,347 mails from me. In more recent years Stack Overflow has also become a popular place to get help and ask questions about Apache Camel. I have 1,619 answers there and have built up a reputation of 42,322, but those numbers include other projects, so maybe 1,500 or so of the answers are about Apache Camel. I have also written 336 blog posts, and 95+% of those are related to Apache Camel.

Also in those 10 years I co-authored two books on Apache Camel and two reference cards, and I have given countless public talks at conferences, Red Hat events, webinars, and privately at customer engagements.

A couple of years ago I started the Camel IDEA plugin project, to make Camel tooling awesome in IDEA. This project now has a roadmap to bring similar functionality to other editors such as Eclipse.

At Red Hat I have also enjoyed working on other projects related to Apache Camel, such as fabric8, hawtio, the Vert.x Camel adapter, and the Fuse IDE editor. Camel is included in ActiveMQ, so I have spent time working on that project as well, and also on making Camel work great with containers such as Kubernetes and with Spring Boot.

So what I am saying is that maintaining an open source project is a lot of work, even if it's your day job. If you have a passion for it and go the extra mile, you spend a lot more time on the project than a regular 8-to-4 job demands.


Edward J. Yoon: A new theory unraveling the secrets of deep learning

Naftali Tishby, a computer scientist and professor of physics at the Hebrew University of Jerusalem, presented a new theory at a conference in Berlin last year that explains how deep learning works.

"deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent."

In other words, deep learning is an information bottleneck procedure: it squeezes out as much of the data's noise as possible while keeping only the essential information about what the data represent.

The information bottleneck method, given the joint probability distribution of two random variables, compresses one variable while preserving as much as possible of the mutual information between the two (see Wikipedia [1]).

The "Information theory of deep learning" section on Wikipedia is only half finished, but suppose X is a large, complex data set such as the pixels of real dog photos, and Y is a simple variable describing those data, such as the label "dog". Deep learning, then, is the process of reaching the goal of generalization by finding a compressed representation of X that retains as much of the information about Y as possible.
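As a point of reference (this is the standard form of the information bottleneck objective from the literature, not a formula quoted from Tishby's talk), the compressed representation \( T \) of \( X \) is chosen by solving

\[ \min_{p(t|x)} \; I(X;T) \;-\; \beta\, I(T;Y), \]

where \( I(\cdot\,;\cdot) \) denotes mutual information and \( \beta \) sets the trade-off between compressing \( X \) and preserving information about \( Y \).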

The reason I became interested in this is that Geoffrey Hinton, the godfather of deep learning, reportedly praised Tishby's results himself:

 “I have to listen to it another 10,000 times to really understand it, but it's very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

What is also amusing is that, according to this theory, the limits of what deep learning can and cannot do become clear: because it throws away the details, it is not expected to be good at things like multiplying large numbers or cracking cryptographic codes.

1. https://en.wikipedia.org/wiki/Information_bottleneck_method

Community Over Code: How Apache Directors Run ASF Board Meetings


I was recently fortunate enough to be re-elected to the ASF’s Board of Directors, along with 8 other excellent candidates. Since two new directors were elected – Isabel and Roman – we plan to have returning directors work together to improve our documentation of how we run our board meetings so smoothly.

This is my personal timeline of how I volunteer as a director, in terms of our monthly board meetings (there are a lot of other things directors do too!).

I’ve written overviews about Apache governance and the board and our board meeting process, and I’m also a PMC Member on the Apache Whimsy project that builds the tools that automate our meetings. So there are plenty of overviews, but we don’t yet have a detailed written description of how a new director can use the Whimsy tool to simplify reviewing our giant monthly agenda and its 80+ reports. So here goes!

Second Wednesday of the Month: Check Shepherd Reports

Board meetings are scheduled the third Wednesday of every month, so my director work starts about a week before the meeting. Some projects have already put their reports into the agenda, and special orders are often (but not always) available then. So I open up my web browser and head over to Whimsy’s board agenda tool:

https://whimsy.apache.org/board/agenda/ (Note: Private to ASF Members only, sorry!)

After I log in, this brings up the agenda homepage, which is a listing of every item in the monthly board agenda, in order. The agenda includes the items any corporate board agenda would have, such as a Roll Call, officer reports, and discussion items. It also includes any Special Orders (resolutions the board is considering) and, importantly, a list of 70+ Apache PMC project quarterly reports. The ASF publishes all board meeting minutes after approval; check them out.

Whimsy color-codes each Agenda item, so I know if it’s been posted to the agenda yet, or if there are any issues raised on that item or comments from other directors.

How I Review My Assigned Shepherd Reports

I’m the Shepherd for my randomly assigned 1/9th of all reports, just like each of the 9 directors, so I first want to review reports I’m the primary watcher for. Whimsy makes this easy – I just select Navigation – Queue from the top menubar. That takes me to a detail page that lists only the agenda items I’m assigned as Shepherd for.

Here, I have a list of ‘my’ reports for the month, with detailed status for each: has the PMC chair submitted it yet, are there comments or issues with any of them, etc. So I read through the list, and for each report:

  • If it’s red – not submitted yet – then I head over to that PMC’s private@ mailing list.
    • If the PMC has already drafted a report – either on private@ or dev – I skip it, trusting that they’ll submit the report to the agenda soon.
    • If I can’t find evidence the PMC is writing their report, I press the blue (send email) button on the page. That opens up my mail client to email the PMC Chair, the private@ list, and the board list with a preformatted email reminding them to submit reports on time.
  • If the report is yellow, it’s in the agenda. Click on the PMC name to open the agenda page with their report.
    • There are a lot of factors in reviewing reports; I tend to focus on the community aspects first, and making sure the project has provided a picture of their activity.
    • Most PMC reports are fine, so I’d press the blue (approve) button at the bottom of the report. That saves my ‘preapproval’ locally, ready to commit later on to the agenda on the server.
    • Sometimes, things in the report aren’t clear, or I have additional questions. In that case, I press the blue (add comment) button, and enter a short question in the dialog. That gets saved to commit later.
    • At this point, the top navbar has a little red ‘1’ box – showing that I have local approvals or comments to commit. I’ll do that later.
  • I’m ready for my next Shepherd report, so I look at the bottom navbar and click the (next) button bearing the name of my next Shepherd report (in agenda order) – Whimsy knows which projects are assigned to me this month! I press that button (or just hit the right arrow).
  • That opens up the next project report for me to review – read it and approve or comment.
  • Eventually, I’ve read the 8-10 reports I’m assigned as Shepherd for the month. When I press next/right arrow now, it brings me back to the Shepherd queue overview page again!

I’ve probably sent one or two emails for late reports, approved the rest of my Shepherd reports, and entered a comment or question on one. Each of the actions I’ve queued up is listed here on the Shepherd queue page. Now I just click (commit) at the bottom of the page. That opens a dialog with a preformatted checkin comment – press Commit, and I’m done. Whimsy automatically ensures that all the actions I’ve taken are committed to the right place in the agenda. I’m done for today.

That Weekend: I Review the Whole Agenda

By the weekend, most reports are submitted in the agenda, so I find time to tackle the whole thing. I open up the agenda tool again and instinctively press (refresh) at the bottom – mostly superstition: Whimsy usually keeps everything updated behind the scenes magically.

Now, I need to review all the items that are actually submitted. That’s easy: select Navigation – Queue. This brings me to a similar page like the Shepherd queue, except it includes all the reports that are actually in the agenda by now.

I’ll spend an hour reading reports one by one (sometimes I start from Z and work backward – the Queue works both ways). When my eyes get tired from reading reports and following links, it’s time to take a break – now I hit the red ‘xx’ marker in the top navbar telling me how many reports I’ve marked. That brings up a page where I can (commit), and checkin all the work I’ve done. Time for a break.

If there are any important questions that come up at this point, I’ll often put a comment on the agenda, and then use my email client to email the question to the PMC directly. Sometimes, a PMC will write back quickly enough that we can get the question answered in the report itself before the board meeting. Sometimes, there will be a serious issue with a PMC’s report – in addition to adding a comment, I’ll also press the (flag) button. This marks the report for special discussion during the meeting.

I’ll also review all the Special Orders (formal resolutions proposed) and Discussion Items (topics for the board to discuss new proposals) to see if anything needs questions asked before the meeting.

Monday or Tuesday – Before The Meeting

By now, all reports should be submitted, so I’ll come back to the agenda tool and Navigation – Queue. This will walk me through just the remaining reports that are submitted and I haven’t yet preapproved. I’ll spend the time to read and review all the rest, and then commit my work here again.

Now it’s time to look at the whole agenda again, and see if there are any remaining issues still:

  • Anything red is not submitted yet. Double-check that some Director has contacted the PMC or officer to ask them to submit (either ASAP, or plan to submit for next month).
  • Anything in dark yellow has a ‘flag’, meaning at least one director has serious questions or comments on the report. I’ll read that director’s comments, and briefly review the report again, so I’m prepared for the meeting.
  • What about other directors’ comments? I’m curious if someone else spotted smaller issues, so I’ll click on Navigate – Comments. This brings me to a listing page, by report, of all the comments everyone has entered about reports. Once you have read all the comments (you can reply by adding your own comments on any item), you can also ‘Mark Read’ all current comments.
  • I sometimes remember that I have Action Items assigned from the previous month – so at this point, I’ll click on that section of the Agenda. Directors with Action Items can click on the Status: right in the page to get an edit box to type in status or notes – and then commit the changes.

Wednesday – Board Meeting Day!

I always make sure to look for last-minute traffic on the board@ mailing list, and to log in to our IRC backchannel well before the meeting starts. That helps answer a lot of minor questions or changes that might have been made. I’ll also make sure the headset for my phone is set up for dialing into the conference line.

Once the meeting starts, everyone will have the agenda tool open in a web browser – sometimes two copies. I also often have a list archive browser page open, in case questions come up about mailing list traffic in a specific project.

I usually follow along hitting Next or right arrow to view the current item or report that we’re discussing on the call. This shows all the information – the report, any comments, any comments or mail discussions from past months – on each project or officer report page, so it’s a great resource.

When it comes time for special orders, we get to see even more Whimsy magic. While votes on board resolutions are by voice – the Secretary asks us each to vote yes/no on the call – the Secretary is recording the vote live in the Whimsy tool. Right after the last director votes ‘yes!’, your local copy of the agenda tool will update with a new Minutes section at the bottom of the resolution, noting it passed unanimously.

After The Board Meeting

After the meeting, the Chairman sends out a brief report to committers, and the Secretary uses Whimsy to send all director comments to the relevant PMC’s private@ mailing lists. The day after the board meeting is also the time for directors with Action Items to follow up on them.

Often if a serious question about a report comes up during the meeting, a director (by default, the Shepherd for that report) will be assigned an Action Item for it. The action is to ensure the PMC sees the board’s feedback, and follow up to ensure that an answer or reply gets to board@ before the next meeting.

Being able to do everything in the Whimsy agenda tool is a lifesaver for reviewing the agenda. Everything is always kept up to date, and I can do everything quickly directly from a single web browser. Even better, in the hectic hour before the meeting, Whimsy is smart enough to resolve most checkin conflicts (if someone else is doing preapprovals at the same time!) for you.

While the code is customized to the ASF and our specific board agenda format, the Apache Whimsy project is almost at the point that we could provide the software to other organizations if there was a need. Experience with Ruby, Rails, Sinatra, httpd, and Subversion or Git is required to run the server.

The post How Apache Directors Run ASF Board Meetings appeared first on Community Over Code.

Shawn McKinney: py-fortress Jump Start


py-fortress is a new open source library implementing RBAC standard features on the Python3 platform.  It requires an LDAP server, and these instructions provide help using docker images to save some time.  This post walks through LDAP setup, and some basic testing to verify working order.  More info in the README.

Prerequisites

Minimum hardware requirements

  • 1 Core
  • 1 GB RAM

Minimum software requirements

  • Linux machine
  • git installed
  • docker engine installed
  • Python3 and virtualenv (venv)

Start using ApacheDS or OpenLDAP Docker Image

1. Pull the docker image (pick one):

a. apacheds

docker pull apachedirectory/apacheds-for-apache-fortress-tests

b. openldap

docker pull apachedirectory/openldap-for-apache-fortress-tests

2. Run the docker container (pick one):

a. apacheds

export CONTAINER_ID=$(docker run -d -P apachedirectory/apacheds-for-apache-fortress-tests)
export CONTAINER_PORT=$(docker inspect --format='{{(index (index .NetworkSettings.Ports "10389/tcp") 0).HostPort}}' $CONTAINER_ID)
echo $CONTAINER_PORT

b. openldap

export CONTAINER_ID=$(docker run -d -P apachedirectory/openldap-for-apache-fortress-tests)
export CONTAINER_PORT=$(docker inspect --format='{{(index (index .NetworkSettings.Ports "389/tcp") 0).HostPort}}' $CONTAINER_ID)
echo $CONTAINER_PORT
  • Make note of the port, it’s needed later.
  • Depending on your docker setup, you may need to run as root or with sudo privileges.

Setup Python Runtime and Configure py-fortress Usage

1. Clone py-fortress

git clone https://github.com/shawnmckinney/py-fortress.git

2. Now edit config file:

vi test/py-fortress-cfg.json

3. Set the LDAP Port

...
"ldap": {
...
"port": 32768,
...
  • Use the port value obtained earlier.

4. Update the connection parameters (pick one):

a. apacheds:

"dn": "uid=admin,ou=system",

b. openldap:

"dn": "cn=Manager,dc=example,dc=com",

5. Set the structure in DIT:

...
"dit": {
    "suffix": "dc=example,dc=com",
    "users": "People",
    "roles": "Roles",
    "perms": "Perms"
},
...
  • If in doubt use the defaults.

6. Save and exit

7. Prepare your terminal for execution of python3. From the main dir of the git repo:

pyvenv env
. env/bin/activate
pip3 install ldap3
export PYTHONPATH=$(pwd)
cd test

8. Run the bootstrap program that creates the LDAP node structure, i.e. the DIT:

python3 test_dit_dao.py
  • Locations for these nodes are set in the config file.

Integration Tests

These tests verify that the setup worked correctly; they write output to standard out but should not report errors.

1. Run the admin mgr tests:

python3 test_admin_mgr.py

Adds, assigns, and grants entities and relationships. Run it a second time to test the teardown APIs, e.g. delete and revoke.

2. Run the access mgr tests:

python3 test_access_mgr.py

Tests session creation, access checks, etc.

3. Run the review mgr tests:

python3 test_review_mgr.py

test_review_mgr does finds, searches, etc.  Any of these tests may be run multiple times, except the bootstrap.  After you have finished testing, you can reset the data simply by removing the docker image and starting up a new one.  The README has some help there.

END

Next up, using the py-fortress Command Line Interpreter

Olivier Lamy: Dear SNCF ticket inspectors, you do a fine job

For our holidays in France, we chose to spend a week in Brittany (good timing, since there is a heat wave in Paris).
Actually no, nothing was improvised: the tickets were booked and PAID FOR nearly two months ago over the internet (this point matters for the rest of the post).
So the train leaves Paris Montparnasse at 10:04 this Tuesday, June 30th 2015. Since the start of our holidays the plan had been to return our rental car and then take our train to beautiful, cool Brittany.
A good idea, isn't it? We thought so too, but that was without counting on a few small details we had forgotten...
Up to then we had been staying in the southern Essonne suburbs. We thought it would take about an hour and a half to get to the station. Big mistake!! It took us two hours.
So we finish the trip in a panic, constantly checking our watches, on edge, facing the great incivility (I would even say selfishness) of French drivers... a set of feelings we no longer remembered.
So we arrive at 10:00 and the train leaves in 4 minutes!!
Returning the car to the rental agency happens in the greatest possible chaos.
Then the race begins. A quick reminder of the context: we are a family with 4 children, a little one of 3 (who doesn't walk all the time, so we have a stroller), two other girls of 7 and 12, and a boy of 14. So yes, we are loaded down, and at that point we have only 3 minutes left to get from the Avis parking lot to the platform.
The children understand the situation and take charge of suitcases (for a 7-year-old a suitcase can be very heavy, but she helps us a lot).
With my stroller and two suitcases to pull, I now understand how hard public places are for disabled people!!! But somehow we make it, climbing into the rear carriage just as the departure signal sounds.
A big thank you to the passengers who helped us get our suitcases, stroller and bags on board. On top of everything it is very hot on this heat-wave day!!
So we are on the train. Well, almost, because the train splits at Rennes and we will have to walk up ten carriages to get as close as possible to the locomotive so we can change trains in under 4 minutes. More great moments of sweat await us!!
In all this rush there was no time to collect the tickets ALREADY paid for more than 2 months earlier (yes, I know, I keep insisting on this point). So one of my first concerns is to find a conductor and explain that, stuck in traffic, we didn't have time to collect our tickets, but that I do have the booking reference, etc...
His cold reply, in a very ironic, even mocking tone: "don't worry, sir, we will take good care of your case." I remind you, readers, that we have just been running through the Paris heat wave loaded with bags, suitcases and a stroller, and this gentleman allows himself some very ironic humor...
Very naively, for a very short moment I think this inspector is actually quite friendly and will sort out our problem.
So we begin our trek up the ten carriages. I assure you that walking up ten carriages with a stroller and the luggage of a family of 6 is really not simple, especially when the train is full and plenty of people don't even bother to move, even slightly, the bags they leave on the floor half blocking the way (yes, it's apparently a bit hard to put your bag up on the rack so it doesn't bother others...).
Halfway through our trek we run into the inspectors (the ones we had already informed when boarding the train), who ask us: "tickets, please."
So I try to discuss it, explaining our situation again and giving my booking reference for verification (but apparently in 2015 inspectors don't have the means to check the tickets attached to my booking reference). Apparently I had not quite grasped the nuance between an e-ticket and tickets to be collected.
Fine, but I have already paid for my ticket, and we simply had bad luck because of the traffic.
The inspector explains that I could in fact be cheating: getting my ticket refunded while still taking the train!!!
I admit that for a father of 4 it is always a bit hard to be treated as a thief in front of his children.
So I show him my order, marked "non-exchangeable, non-refundable." Frankly, I don't see how I could pull off that scam.
But no, these gentlemen are unyielding and give us 5 fines of 122 euros each.
Here I admit I don't understand. Our tickets were paid for and booked more than 2 months in advance.
I calm down and try to make my children understand that no, we are not thieves; it is simply bad luck.
We finally manage to reach the front of the train. And yes, we still have to change trains in under 4 minutes, all while transferring a stroller and luggage for a family of 6, not forgetting that France is in a heat wave...
In the end we make it...
To this day I still don't understand how those inspectors could fine us like that. The excuse that they had no way to check the status of our booking seems a bit rich. After all, it is 2015, in a civilized country with technology that is often state of the art.
So yes, dear inspectors, I think you do a very fine job, fining a family with 4 children (who had already paid for their tickets!!). The target is obviously an easy one; there are so many other places in France, but those targets are perhaps more complicated and require a bit more courage...

Bryan Pendleton: Fabiano!


Well here's something that I was somewhat wondering if I'd live long enough to see: An American Will Play For The World Chess Championship

For the first time since Bobby Fischer captivated the country, a U.S. grandmaster has a shot at becoming the undisputed world chess champion. Fabiano Caruana, the current world No. 3 and the top American chess grandmaster, won the right today to play for the game’s most coveted prize. He’ll face the reigning world champion, Magnus Carlsen of Norway, in a 12-game, one-on-one match in London in November. It won’t be easy. Carlsen, the current world No. 1, has been champion since 2013 and became a grandmaster when he was 13 years old. He most recently defended his title in 2016 in New York City.

And, for a slightly more chess-oriented bit of coverage: Caruana Wins FIDE Candidates' Tournament

Fabiano Caruana won the 2018 FIDE Candidates' Tournament in Berlin convincingly. He defeated Alexander Grischuk in the final round with the black pieces. Sergey Karjakin blundered but held the draw vs Ding Liren, and both Kramnik-Mamedyarov and Aronian-So were also drawn.

Caruana will face Magnus Carlsen for the world chess championship in London in November.

Now I just have to wait 6 months.

At least I have 56 wonderful games to play through, to keep me busy until then.

By the way, Caruana's result is clearly the most impressive aspect of the tournament, and there's no way to overstate 5 wins from 14 games in a field of this strength.

But don't overlook the amazing performance of 25-year-old Chinese superstar Ding Liren, who managed to play all 14 games without a single loss, and ended up coming in 4th, just 1.5 points behind Caruana. Absolutely phenomenal!


Shawn McKinney: Using the py-fortress Command Line Interpreter


The Command Line Interpreter (CLI) drives the admin and review APIs,  allowing ad-hoc RBAC setup and interrogation.  More info in the README.

Prerequisites

Completed the setup described: py-fortress Jumpstart

Getting Started

The command syntax:

python3 cli.py entity operation --arg1 --arg2 ... 

Where entity is (pick one):

  • user
  • role
  • object
  • perm

The operation is (pick one):

  • add
  • mod
  • del
  • assign
  • deassign
  • grant
  • revoke
  • read
  • search

The arguments are two dashes '--' plus the attribute name, followed by the attribute value, with a space between them.

--attribute_name someattributevalue

If an attribute value contains white space, enclose it in single ' ' or double " " quotes.

--attribute_name 'some value' --attribute_name2 "still more values"

For example, a perm grant:

$ python3 cli.py perm grant --obj_name myobj --op_name add --role 'my role'

This command invokes Python’s runtime with the program name, cli.py, followed by an entity type, operation name and multiple name-value pairs.

More Tips:

  • user and perm entities require the --role arg on assign, deassign, grant, and revoke operations.
  • These commands map directly to the admin and review APIs.
  • The description of the commands, including required arguments, can be inferred from the API docs inline in the admin_mgr and review_mgr modules.
  • The program output echoes the inputted arguments and the results.

admin mgr

a. user add

$ python3 cli.py user add --uid chorowitz --password 'secret' --description 'added with py-fortress cli'
uid=chorowitz
description=added with py-fortress cli
user add
success

b. user mod

$ python3 cli.py user mod --uid chorowitz --l my location --ou my-ou --department_number 123
uid=chorowitz
department_number=123
l=my location
ou=my-ou
user mod
success

c. user del

$ python3 cli.py user del --uid chorowitz
uid=chorowitz
user del
success

d. user assign

$ python3 cli.py user assign --uid chorowitz --role account-mgr
uid=chorowitz
role name=account-mgr
user assign
success

e. user deassign

$ python3 cli.py user deassign --uid chorowitz --role account-mgr
uid=chorowitz
role name=account-mgr
user deassign
success

f. role add

$ python3 cli.py role add --name account-mgr
name=account-mgr
role add
success

g. role mod

$ python3 cli.py role mod --name account-mgr --description 'this desc is optional'
description=cli test role
name=account-mgr
role mod
success

h. role del

$ python3 cli.py role del --name account-mgr
name=account-mgr
role del
success

i. object add

$ python3 cli.py object add --obj_name page456
obj_name=page456
object add
success

j. object mod

$ python3 cli.py object mod --obj_name page456 --description 'optional arg' --ou 'another optional arg'
obj_name=page456
ou=another optional arg
description=optional arg
object mod
success

k. object del

$ python3 cli.py object del --obj_name page789
obj_name=page789
object del
success

l. perm add

$ python3 cli.py perm add --obj_name page456 --op_name read
obj_name=page456
op_name=read
perm add
success

m. perm mod

$ python3 cli.py perm mod --obj_name page456 --op_name read --description 'useful for human readable perm name'
obj_name=page456
op_name=read
description=useful for human readable perm name
perm mod
success

n. perm del

$ python3 cli.py perm del --obj_name page456 --op_name search
obj_name=page456
op_name=search
perm del
success

o. perm grant

$ python3 cli.py perm grant --obj_name page456 --op_name update --role account-mgr
obj_name=page456
op_name=update
role name=account-mgr
perm grant
success

p. perm revoke

$ python3 cli.py perm revoke --obj_name page456 --op_name update --role account-mgr
obj_name=page456
op_name=update
role name=account-mgr
perm revoke
success

review mgr

a. user read

$ python3 cli.py user read --uid chorowitz
 uid=chorowitz
 user read
 chorowitz
 uid: chorowitz
 dn: uid=chorowitz,ou=People,dc=example,dc=com 
 roles: ['account-mgr'] 
 ...
 *************** chorowitz *******************
 success

b. user search

 $ python3 cli.py user search --uid c
 uid=c
 user search
 c*:0
 uid: canders
 dn: uid=canders,ou=People,dc=example,dc=com
 roles: ['csr', 'tester'] 
 ...
 *************** c*:0 *******************
 c*:1
 uid: cedwards
 dn: uid=cedwards,ou=People,dc=example,dc=com
 roles: ['manager', 'trainer'] 
 ...
 
 *************** c*:1 *******************
 c*:2
 uid: chandler
 dn: uid=chandler,ou=People,dc=example,dc=com
 roles: ['auditor'] 
 ...
 *************** c*:2 *******************
 c*:3
 uid: chorowitz
 dn: uid=chorowitz,ou=People,dc=example,dc=com
 roles: ['account-mgr'] 
 ...
 *************** c*:3 ******************* 
 success

c. role read

 $ python3 cli.py role read --name account-mgr
 name=account-mgr
 role read
 account-mgr
 dn: cn=account-mgr,ou=Roles,dc=example,dc=com
 props: 
 members: ['uid=cli-user2,ou=People,dc=example,dc=com', 'uid=chorowitz,ou=People,dc=example,dc=com']
 internal_id: 5c189235-41b5-4e59-9d80-dfd64d16372c
 name: account-mgr
 constraint: <model.constraint.Constraint object at 0x7fc250bd9e10>
 description: 
 Role Constraint:
 raw: account-mgr$0$$$$$$$
 end_date: 
 end_lock_date: 
 timeout: 0
 begin_time: 
 end_time: 
 name: account-mgr
 day_mask: 
 begin_date: 
 begin_lock_date: 
 *************** account-mgr *******************
 success

d. role search

 $ python3 cli.py role search --name py-
 name=py-
 role search
 py-*:0
 dn: cn=py-role-0,ou=Roles,dc=example,dc=com
 description: py-role-0 Role
 constraint: <model.constraint.Constraint object at 0x7f17e8745f60>
 members: ['uid=py-user-0,ou=People,dc=example,dc=com', 'uid=py-user-1,ou=People,dc=example,dc=com', ... ]
 internal_id: 04b82ce3-974b-4ff5-ad21-b19ecca57722
 name: py-role-0
 *************** py-*:0 *******************
 py-*:1
 dn: cn=py-role-1,ou=Roles,dc=example,dc=com
 description: py-role-1 Role
 constraint: <model.constraint.Constraint object at 0x7f17e8733128>
 members: ['uid=py-user-8,ou=People,dc=example,dc=com', 'uid=py-user-9,ou=People,dc=example,dc=com']
 internal_id: 70524da8-3be6-4372-a606-d8175e2ca63b
 name: py-role-1 
 *************** py-*:1 *******************
 py-*:2
 dn: cn=py-role-2,ou=Roles,dc=example,dc=com
 description: py-role-2 Role
 constraint: <model.constraint.Constraint object at 0x7f17e87332b0>
 members: ['uid=py-user-3,ou=People,dc=example,dc=com', 'uid=py-user-5,ou=People,dc=example,dc=com', 'uid=py-user-7,ou=People,dc=example,dc=com']
 internal_id: d1b9da70-9302-46c3-b21b-0fc45b863155
 name: py-role-2
 *************** py-*:2 *******************
 ...
 success

e. object read

 $ python3 cli.py object read --obj_name page456
 obj_name=page456
 object read
 page456
 description: optional arg
 dn: ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: 1635cb3b-d5e2-4fcb-b61a-b8e91437e536
 props: 
 obj_name: page456
 ou: another optional arg
 type: 
 success

f. object search

 $ python3 cli.py object search --obj_name page
 obj_name=page
 object search
 page*:0
 props: 
 obj_name: page456
 description: optional arg
 dn: ftObjNm=page456,ou=Perms,dc=example,dc=com
 ou: another optional arg
 type: 
 internal_id: 1635cb3b-d5e2-4fcb-b61a-b8e91437e536
 page*:1
 props: 
 obj_name: page123
 description: optional arg
 dn: ftObjNm=page123,ou=Perms,dc=example,dc=com
 ou: another optional arg
 type: 
 internal_id: a823ef98-7be4-4f49-a805-83bfef5a0dfb
 success

g. perm read

 $ python3 cli.py perm read --obj_name page456 --op_name read
 op_name=read
 obj_name=page456
 perm read
 page456.read
 internal_id: 0dc55181-968e-4c60-8755-e20fa1ce017d
 dn: ftOpNm=read,ftObjNm=page456,ou=Perms,dc=example,dc=com
 abstract_name: page456.read
 type: 
 roles: 
 description: useful for human readable perm name
 props: 
 obj_name: page456
 obj_id: 
 op_name: read
 users: 
 success

h. perm search

$ python3 cli.py perm search --obj_name page
 obj_name=page
 perm search
 page*.*:0
 props: 
 roles: 
 abstract_name: page456.read
 obj_id: 
 users: 
 op_name: read
 internal_id: 0dc55181-968e-4c60-8755-e20fa1ce017d
 obj_name: page456
 type: 
 dn: ftOpNm=read,ftObjNm=page456,ou=Perms,dc=example,dc=com
 description: useful for human readable perm name
 page*.*:1
 props: 
 roles: ['account-mgr']
 abstract_name: page456.update
 obj_id: 
 users: 
 op_name: update
 internal_id: 626bca86-014b-4186-83a6-a583e39868a1
 obj_name: page456
 type: 
 dn: ftOpNm=update,ftObjNm=page456,ou=Perms,dc=example,dc=com
 description: 
 page*.*:2
 props: 
 roles: ['account-mgr']
 abstract_name: page456.delete
 obj_id: 
 users: 
 op_name: delete
 internal_id: 6c2fa5fc-d7c3-4e85-ba7f-5e514ca4263f
 obj_name: page456
 type: 
 dn: ftOpNm=delete,ftObjNm=page456,ou=Perms,dc=example,dc=com
 description: 
 success

i. perm search (by role)

 $ python3 cli.py perm search --role account-mgr
 perm search
 account-mgr:0
 description: 
 abstract_name: page456.update
 obj_id: 
 obj_name: page456
 users: 
 op_name: update
 type: 
 props: 
 roles: ['account-mgr']
 dn: ftOpNm=update,ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: 626bca86-014b-4186-83a6-a583e39868a1
 account-mgr:1
 description: 
 abstract_name: page456.delete
 obj_id: 
 obj_name: page456
 users: 
 op_name: delete
 type: 
 props: 
 roles: ['account-mgr']
 dn: ftOpNm=delete,ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: 6c2fa5fc-d7c3-4e85-ba7f-5e514ca4263f
 success

j. perm search (by user)

 $ python3 cli.py perm search --uid chorowitz
 perm search
 chorowitz:0
 type: 
 description: 
 dn: ftOpNm=update,ftObjNm=page456,ou=Perms,dc=example,dc=com
 obj_id: 
 users: 
 internal_id: 626bca86-014b-4186-83a6-a583e39868a1
 roles: ['account-mgr']
 abstract_name: page456.update
 props: 
 obj_name: page456
 op_name: update
 chorowitz:1
 type: 
 description: 
 dn: ftOpNm=delete,ftObjNm=page456,ou=Perms,dc=example,dc=com
 obj_id: 
 users: 
 internal_id: 6c2fa5fc-d7c3-4e85-ba7f-5e514ca4263f
 roles: ['account-mgr']
 abstract_name: page456.delete
 props: 
 obj_name: page456
 op_name: delete
 success

END

Next up, programming with py-fortress

 

Justin Mason: Links for 2018-03-27


Jaikiran Pai: Ant 1.10.3 released with JUnit 5 support

We just released the 1.9.11 and 1.10.3 versions of Ant today. The downloads are available on the Ant project's download page. Both of these are mainly bug fix releases, especially the 1.9.11 version. The 1.10.3 release is an important one for a couple of reasons. The previous 1.10.2 release unintentionally introduced a bunch of changes which caused regressions in various places in Ant tasks. These have now been reverted or fixed in this new 1.10.3 version.

In addition to these fixes, this 1.10.3 version of Ant introduces a new junitlauncher task. A while back, the JUnit team released JUnit 5.x. This version is a major change from the previous JUnit 3.x and 4.x versions, both in terms of how tests are written and how they are executed. JUnit 5 introduces a separation between test launching and test identification and execution. What that means is that, for build tools like Ant, there's now a clear API exposed by JUnit 5 which is solely meant to deal with how tests are launched. Imagine something along the lines of "launch test execution for the classes within this directory". Although Ant's junit task already supported such a construct, the way we used to launch those tests was very specific to Ant's own implementation and was getting more and more complex. With the introduction of this new API in the JUnit 5 library, it's much easier and more consistent to launch these tests.

JUnit 5, further introduces the concept of test engines. Test engines are responsible for "identifying" which classes are actually tests and what semantics to apply to those tests. JUnit 5 by default comes with a "vintage" engine which identifies and runs JUnit 4.x style tests and a "jupiter" engine which identifies and runs JUnit 5.x API based tests.

The "junitlauncher" task in Ant introduces a way to let the build specify which classes to choose for test launching. The goal of this task is just to launch the test execution and let the JUnit 5 framework identify and run the tests. The implementation shipped in Ant 1.10.3 is the bare minimum for this task. We plan to add more features as we go along and as we get feedback on it. In particular, this new task doesn't currently support executing the tests in a separate forked JVM, but we do plan to add that in a subsequent release.

The junit task, which has shipped in Ant for a long time, will continue to exist and can be used for executing JUnit 3.x or JUnit 4.x tests. For JUnit 5, however, the junitlauncher task is what will be supported in Ant.

More details about this new task can be found in the junitlauncher's task manual. Please give it a try and report any bugs or feedback to our user mailing list.

Edward J. Yoon: The Belief Propagation algorithm

At one time I was deeply into parallel graph computation and algorithms. Among them, let me write about the Belief Propagation algorithm. It is a technique for approximating the solution to inference problems on graphical models: given the distributions of particular observed random variables on the graph, it estimates the marginal distribution of every unobserved random variable they directly or indirectly influence.

For example, in a polytree structure like the one below, when evidence is given at nodes C and E, what is \( P(B|C,E) \)?



Because the algorithm estimates each node's probability distribution through downward or upward message passing between nodes followed by a data-fusion step, the Bulk Synchronous Parallel model is an excellent fit for running it in parallel.
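For reference (this is the standard sum-product message update for a pairwise model, not anything specific to the BSP implementation), the message node \( i \) passes to a neighboring node \( j \), and the belief fused at node \( i \) from its incoming messages, are

\[ m_{i \to j}(x_j) = \sum_{x_i} \phi_i(x_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i), \qquad b_i(x_i) \propto \phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i). \]

Each superstep of message passing followed by the fusion of incoming messages maps naturally onto one BSP round.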

I suspect Google's DistBelief got its name from this kind of message passing in distributed Bayesian networks.


Justin Mason: Links for 2018-03-28


Piergiorgio Lucidi: Becoming a member of the Apache Software Foundation


Last week, after a hectic period spent traveling for my daily consultancy activities at TAI Solutions, I had to visit several cities for meetings with colleagues and customers.

I started my journey on Tuesday going to Florence, then went to Pisa on Wednesday and to Naples on Thursday, finally returning to Rome the same day. I was literally shocked when, after dinner, I saw a new email in my mailbox titled Invitation to join The Apache Software Foundation Membership. 

I can't describe how honored and grateful I feel for this nomination. This award means a lot to me: knowing that my contributions were well received and that what I did for the Foundation over the last year was shared, voted on and accepted is enough for me :)

I'm very happy to be part of the ASF, and I hope that my ongoing and future contributions will help everyone, not only by solving their problems but also by trying to involve more people in the Foundation. Together we can make a difference for the public good.

I would like to thank all the ASF members who made this possible, both for the nomination and for voting for me; this is absolutely awesome! 

Contribution path at ASF

If you are wondering what becoming an Apache Member means, here is a very short description of a typical contribution path that anyone can follow at the ASF:

  1. User: you typically start by using one of the ASF projects.
  2. Contributor: you send patches for code or docs and/or support users in the official channels (mailing list, IRC, etc.). You don't have direct access to any resources provided by the ASF.
  3. Committer: if you contribute consistently, you can be invited by the Project Management Committee (PMC) to become a Committer. This gives you direct access to submit code and documentation and to update the website.
  4. PMC Member: if you put a lot of effort into the project, you can be invited to become a PMC Member. Now you participate in deciding, together with the other PMC Members, the direction the project will take: you can vote on every decision and release, and your vote counts!
  5. Project Chair: a PMC Member nominated to be the official interface with the ASF Board and to lead the project. If you are a Project Chair you are also an Apache Member.
  6. Apache Member: a Committer or PMC Member who takes care of the ASF itself, investing effort in the Foundation or contributing to more than one project. The nomination and voting are done by the existing ASF Members. Each member is legally a shareholder of the entire foundation and can vote for the ASF Board.

For more information about the responsibilities behind each role, please visit the How it works page on the ASF website.

19th anniversary of "The Apache Way"

I was also very happy that in the same week I was nominated as an Apache Member, the Foundation was celebrating its 19th anniversary! This is a special moment for the ASF, as confirmed by this recent quote from Merv Adrian (Gartner):

"The Apache Software Foundation’s extraordinary contribution to the economic refactoring of software stacks seems to be gaining more momentum with every passing year," wrote Merv Adrian, Analyst and Research Vice President at Gartner. "...the role of the ASF remains so important: by providing a vehicle for developers to work 'in the open,' while keeping the playing field level in many respects, the ASF has enabled the rapid development and pervasive spread of key layers that everyone benefits from." https://blogs.gartner.com/merv-adrian/?p=1213


Justin Mason: Links for 2018-03-30


Shawn McKinney: Testing the py-fortress RBAC0 System


The Command Line Interpreter (CLI) may be used to drive the RBAC System APIs,  to test, verify and understand a particular RBAC policy.

This document also resides here: README-CLI-AUTH

Prerequisites

Getting Started

The syntax for testing py-fortress system commands:

python3 cli_test_auth.py operation --arg1 --arg2 ... 

The operation is (pick one):

  •  auth : maps to access_mgr.create_session
  • check : maps to access_mgr.check_access
  • roles : maps to access_mgr.session_roles
  • perms : maps to access_mgr.session_perms
  • add : maps to access_mgr.add_active_role
  • del : maps to access_mgr.drop_active_role
  • show : displays contents of session to stdout

Where functions are described in the source: access_mgr.py

The args are '--' plus names contained within these py-fortress entities:

  • user.py – e.g. --uid, --password
  • perm.py – e.g. --obj_name, --op_name

Command Usage Tips

  • The description of the commands, i.e. required and optional arguments, can be inferred from the API doc inline in the access_mgr module (see the sketch after this list).
  • This program ‘pickles’ (serializes) the RBAC session to a file called sess.pickle and places it in the executable's folder.  This simulates an RBAC runtime for testing these commands.
  • Call the auth operation first; subsequent ops will use and refresh the session.
  • Constraints on users and roles are enforced. For example, if a user has a timeout constraint of 30 (minutes) and the delay between operations on an existing session exceeds it, the session will be deactivated.
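For orientation only, here is a rough Python sketch of the access_mgr calls that cli_test_auth.py is wrapping. The function names come from the list above; the import paths, the entity constructors, and the extra 'trusted' flag passed to create_session are my assumptions, so treat the inline API docs in access_mgr.py as the authority, not this sketch.

# Assumed import paths -- adjust to the actual package layout in the repo.
from model.user import User
from model.perm import Perm
import access_mgr

# auth: authenticate the user and activate their assigned roles
# (the second argument is assumed to be an 'is_trusted' flag).
session = access_mgr.create_session(User(uid='chorowitz', password='secret'), False)

# check: may this session perform page456.read right now?
allowed = access_mgr.check_access(session, Perm(obj_name='page456', op_name='read'))

# roles / perms: inspect what is currently active in the session.
active_roles = access_mgr.session_roles(session)
active_perms = access_mgr.session_perms(session)

# add / del: activate or drop a role within the session
# (whether the role is passed as a name or an entity is assumed here).
access_mgr.add_active_role(session, 'auditor')
access_mgr.drop_active_role(session, 'auditor')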

___________________________________________________________________________________

Setup Test Data With admin_mgr

To set up RBAC test data, we'll be using another utility, cli.py, that was introduced here: README-CLI-AUTH.md.  Once we've got test data, we can move to the next section, which invokes the RBAC system commands via the cli_test_auth.py program.

1. user add– chorowitz

 (env)~py-fortress/test$ python3 cli.py user add --uid chorowitz --password 'secret' --timeout 30 --begin_date 20180101 --end_date none --day_mask 1234567 --description 'for testing only'
 uid=chorowitz
 description=for testing only
 end_date=none
 begin_date=20180101
 day_mask=1234567
 timeout=30
 name=chorowitz
 user add
 success

2. role add– account-mgr

 (env)~py-fortress/test$ python3 cli.py role add --name 'account-mgr' --timeout 30 --begin_date 20180101 --end_date none --day_mask 1234567
 name=account-mgr
 end_date=none
 begin_date=20180101
 day_mask=1234567
 timeout=5
 role add
 success

3. role add– auditor

 (env)~py-fortress/test$ python3 cli.py role add --name 'auditor' --timeout 5 --begin_date 20180101 --end_date none --day_mask 1234567
 name=auditor
 end_date=none
 begin_date=20180101
 day_mask=1234567
 timeout=5
 role add
 success

4. user assign– chorowitz  to role account-mgr

 (env)~py-fortress/test$ python3 cli.py user assign --uid 'chorowitz' --role 'account-mgr'
 uid=chorowitz
 role name=account-mgr
 user assign
 success

5. user assign– chorowitz to role auditor

 (env)~py-fortress/test$ python3 cli.py user assign --uid 'chorowitz' --role 'auditor'
 uid=chorowitz
 role name=auditor
 user assign
 success

6. object add– page456

 (env)~py-fortress/test$ python3 cli.py object add --obj_name page456
 obj_name=page456
 object add
 success

7. perm add– page456.read

 (env)~py-fortress/test$ python3 cli.py perm add --obj_name page456 --op_name read
 obj_name=page456
 op_name=read
 perm add
 success

8. perm add– page456.edit

 (env)~py-fortress/test$ python3 cli.py perm add --obj_name page456 --op_name edit
 obj_name=page456
 op_name=edit
 perm add
 success

9. perm add– page456.remove

 (env)~py-fortress/test$ python3 cli.py perm add --obj_name page456 --op_name remove
 obj_name=page456
 op_name=remove
 perm add
 success

10. perm grant– page456.edit to role account-mgr

 (env)~py-fortress/test$ python3 cli.py perm grant --obj_name page456 --op_name edit --role account-mgr
 obj_name=page456
 op_name=edit
 role name=account-mgr
 perm grant
 success

11. perm grant– page456.remove to role account-mgr

 (env)~py-fortress/test$ python3 cli.py perm grant --obj_name page456 --op_name remove --role account-mgr
 obj_name=page456
 op_name=remove
 role name=account-mgr
 perm grant
 success

12. perm grant– page456.read  to role auditor

 (env)~py-fortress/test$ python3 cli.py perm grant --obj_name page456 --op_name read --role auditor
 obj_name=page456
 op_name=read
 role name=auditor
 perm grant
 success

________________________________________________________________________________

Perform cli_test_auth.py access_mgr Commands

1. auth– access_mgr.create_session – authenticate, activate roles:

 (env)~py-fortress/test$ python3 cli_test_auth.py auth --uid 'chorowitz' --password 'secret'
 uid=chorowitz
 auth
 success

Now the session has been pickled on the file system in the current directory.

2. show– output user session contents to stdout:

 (env)~py-fortress/test$ python3 cli_test_auth.py show
 show
 session
 warnings: None
 session_id: None
 error_id: None
 expiration_seconds: None
 user: <model.user.User object at 0x7fdfb2745208>
 grace_logins: None
 message: None
 timeout: None
 is_authenticated: True
 last_access: <util.current_date_time.CurrentDateTime object at 0x7fdfb2743e10>
 user
 department_number: 
 l: 
 role_constraints: [<model.constraint.Constraint object at 0x7fdfb2745320>, <model.constraint.Constraint object at 0x7fdfb2745470>]
 postal_code: 
 title: 
 constraint: <model.constraint.Constraint object at 0x7fdfb2745550>
 reset: []
 phones: 
 locked_time: []
 emails: 
 cn: chorowitz
 ou: 
 physical_delivery_office_name: 
 roles: ['account-mgr', 'auditor']
 pw_policy: 
 room_number: 
 mobiles: 
 description: for testing only
 uid: chorowitz
 system: []
 internal_id: 4a7a68ae-d0c3-4328-98dc-e7f64739ed67
 employee_type: 
 sn: chorowitz
 props: 
 dn: uid=chorowitz,ou=People,dc=example,dc=com
 display_name: 
 User Constraint:
 raw: 
 User-Role Constraint[1]:
 day_mask: 1234567
 begin_time: 
 name: account-mgr
 end_lock_date: 
 begin_lock_date: 
 begin_date: 20180101
 end_time: 
 timeout: 30
 raw: account-mgr$30$$$20180101$none$$$1234567
 end_date: none
 User-Role Constraint[2]:
 day_mask: 1234567
 begin_time: 
 name: auditor
 end_lock_date: 
 begin_lock_date: 
 begin_date: 20180101
 end_time: 
 timeout: 5
 raw: auditor$5$$$20180101$none$$$1234567
 end_date: none
 *************** user ******************* 
 success

Displays the contents of session to stdout.

3. check– access_mgr.check_access – perm page456.read:

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name read
 op_name=read
 obj_name=page456
 check
 success

The user has auditor activated so unless timeout validation failed this will succeed.

4. check– access_mgr.check_access – perm page456.edit:

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name edit
 op_name=edit
 obj_name=page456
 check
 success

The user has account-mgr activated so unless timeout validation failed this will succeed.

5. check– access_mgr.check_access – perm page456.remove:

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name remove
 op_name=remove
 obj_name=page456
 check
 success

The user has account-mgr activated so unless timeout validation failed this will succeed.

6. get– access_mgr.session_perms:

 (env)~py-fortress/test$ python3 cli_test_auth.py get
 get
 page456.read:0
 description: 
 abstract_name: page456.read
 obj_id: 
 props: 
 type: 
 roles: ['auditor']
 users: 
 dn: ftOpNm=read,ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: d6887434-050c-48d8-85b0-7c803c9fcf07
 obj_name: page456
 op_name: read
 page456.edit:1
 description: 
 abstract_name: page456.edit
 obj_id: 
 props: 
 type: 
 roles: ['account-mgr']
 users: 
 dn: ftOpNm=edit,ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: 02189535-4b39-4058-8daf-af0e09b0d235
 obj_name: page456
 op_name: edit
 page456.remove:2
 description: 
 abstract_name: page456.remove
 obj_id: 
 props: 
 type: 
 roles: ['account-mgr']
 users: 
 dn: ftOpNm=remove,ftObjNm=page456,ou=Perms,dc=example,dc=com
 internal_id: 10dea5d1-ff1d-4c3d-90c8-edeb4c7bb05b
 obj_name: page456
 op_name: remove
 success

Display all perms allowed for activated roles to stdout.

7. del– access_mgr.drop_active_role – auditor:

 (env) smckinn@ubuntu:~python3 cli_test_auth.py del --role auditor
 del
 success

RBAC distinguishes between assigned and activated roles.

8. roles– access_mgr.session_roles

 (env)~py-fortress/test$ python3 cli_test_auth.py roles
 roles
 account-mgr:0
 begin_time: 
 raw: account-mgr$30$$$20180101$none$$$1234567
 begin_lock_date: 
 end_date: none
 name: account-mgr
 end_time: 
 timeout: 30
 day_mask: 1234567
 begin_date: 20180101
 end_lock_date: 
 success

Notice the auditor role is no longer active.

9. check– access_mgr.check_access – perm page456.read (again):

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name read
 op_name=read
 obj_name=page456
 check
 failed

The auditor role was deactivated so even though it’s assigned, user cannot perform as auditor.

10. add– access_mgr.add_active_role – auditor:

 (env)~py-fortress/test$ python3 cli_test_auth.py add --role auditor
 op_name=read
 obj_name=page456
 check
 success

Now the user should be allowed to resume audit activities.

11. roles– access_mgr.session_roles:

 (env)~py-fortress/test$ python3 cli_test_auth.py roles
 roles
 account-mgr:0
 begin_time: 
 raw: account-mgr$30$$$20180101$none$$$1234567
 begin_lock_date: 
 end_date: none
 name: account-mgr
 end_time: 
 timeout: 30
 day_mask: 1234567
 begin_date: 20180101
 end_lock_date: 
 auditor:1
 end_date: none
 day_mask: 1234567
 raw: auditor$5$$$20180101$none$$$1234567
 begin_date: 20180101
 end_lock_date: 
 timeout: 5
 begin_time: 
 name: auditor
 end_time: 
 begin_lock_date: 
 success

Notice the auditor role has been activated once again.

12. check– access_mgr.check_access – perm page456.read (for the 3rd time):

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name read
 op_name=read
 obj_name=page456
 check
 success

The auditor role is activated once again, so the user can perform auditor operations again.

13. Wait 5 minutes before performing the next step.

Allow enough time for the auditor role's timeout to occur before moving to the next step. Now, if you run the roles command, the auditor role will once again be missing. This behavior is controlled by the 'timeout' attribute on either a user or role constraint.
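
The raw attribute shown in the roles output encodes these constraint fields in a '$'-delimited string. Going only by the two samples above (so the field order here is inferred, not taken from a spec), a tiny sketch that pulls the timeout out of the auditor constraint:

# Field order inferred from the sample raw strings shown above (assumption).
FIELDS = ['name', 'timeout', 'begin_time', 'end_time', 'begin_date',
          'end_date', 'begin_lock_date', 'end_lock_date', 'day_mask']

def parse_constraint(raw):
    # e.g. 'auditor$5$$$20180101$none$$$1234567'
    return dict(zip(FIELDS, raw.split('$')))

c = parse_constraint('auditor$5$$$20180101$none$$$1234567')
print(c['timeout'])   # '5' -> the auditor role drops out of the session after 5 minutes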

14. check– access_mgr.check_access – perm page456.read:

 (env)~py-fortress/test$ python3 cli_test_auth.py check --obj_name page456 --op_name read
 op_name=read
 obj_name=page456
 check
 failed

Because the auditor role has its timeout constraint set to 5 (minutes), it was automatically deactivated from the session.

END

Edward J. Yoon: Storm vs. Spark Streaming: Differences in Their Internal Mechanisms

It may have been overshadowed by machine learning in terms of trends, but real-time, or streaming, processing is a hugely important technology. Let's compare the open-source options Storm and Spark. Of course, the choice of solution is yours, and the one that fits your situation is the optimal solution.

1. Task Parallel vs. Data Parallel

The clearest difference, starting with the computational programming model, is that Storm is task-parallel while Spark Streaming is data-parallel. Personally I prefer Storm's computation-graph structure, which lets you build a more complete system, but in terms of code complexity Spark can have the advantage.

2. Streaming vs. Micro-batches

Strictly speaking, Spark Streaming is less real-time processing than a continuous series of small batches. Where Storm, with its classic event-driven record-at-a-time processing model, handles data while it is still live, Spark handles data that has already come to rest. This produces a difference in latency: Storm is sub-second, while Spark is a user-specified few seconds.
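
To make the micro-batch point concrete, here is a minimal PySpark Streaming sketch (the DStream API; the socket source on port 9999 is just a stand-in for the example). The batch interval passed to StreamingContext is exactly that user-specified "few seconds" of latency:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "MicroBatchExample")
ssc = StreamingContext(sc, 2)   # 2-second micro-batches: this interval is the latency floor

lines = ssc.socketTextStream("localhost", 9999)   # stand-in source for the example
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()   # each print is one completed micro-batch, not one record

ssc.start()
ssc.awaitTermination()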

A significant difference follows from this: Storm is the better choice when you must have no data loss (no data loss), and Spark when you must have no duplicate computation (exactly once).

That said, Storm Trident reportedly supports a micro-batch style as well.



3. Stateless vs. Stateful

As introduced above, Storm processes record by record and does not maintain state, so its failure-recovery mechanism is more complex than Spark's and re-launching will take longer. Beyond fault recovery, stateless vs. stateful (though there may be more to it) makes no other difference.

4. Integration with Batch processing

No lengthy explanation needed: Spark obviously has the advantage here.

Having looked at all of this, what is the best solution? In my judgment Storm is slightly superior. Of course, the optimal solution has to fit your situation.

Steve Loughran: Computer Architecture and Software Security


Gobi's End
There's a new paper covering another speculation-based attack on system secrets, BranchScope.

This one relies on the fact that for branch prediction to be effective, two bits are generally allocated to each entry: strongly & weakly taken, and strongly & weakly not taken. The prediction state of a branch is based on the value in BranchHistoryTable[hash(address)] and is used to choose the speculation; if the prediction was wrong, the entry moves from strongly to weakly, and from weakly to the opposite prediction. Similarly, in the weakly taken/not-taken states, if the prediction was right, the entry moves to the corresponding strong state.

Why so complex? Because we loop all the time
for (int i = 0; i < 1000; i++) {
    doSomething(i);
}
Which probably gets translated into some assembly code (random CPU language I just made up)

    MOV  r1, 0        ; i = 0
L1: CMP  r1, 999      ; compare i with 999
    JGT  end          ; conditional branch out of the loop once i > 999
    JSR  DoSomething  ; call doSomething(i)
    ADD  r1, 1        ; i++
    JMP  L1           ; unconditional jump back to the test
... continue

For 1000 times in that loop the branch is taken, then once, at the end of the loop, it's not taken. The first time it's encountered, the CPU won't know what to do: it will just guess one of them and have a 50% chance of being wrong (see below). After that first iteration, though, it will keep predicting "taken" and keep being right until the loop exits.
If that loop is itself called repeatedly, the fact that the final iteration was mispredicted shouldn't lose the fact that the rest of the loop was predicted correctly, over and over. Hence, two bits.
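
As a toy model (no real CPU's implementation, just the state machine described above): a 2-bit saturating counter only steps one state per outcome, so the single mispredicted loop exit nudges "strongly taken" down to "weakly taken" and the next run of the loop is still predicted correctly.

# 0 = strongly not taken, 1 = weakly not taken, 2 = weakly taken, 3 = strongly taken
def update(state, taken):
    # saturate at 0 and 3; move one step towards the observed outcome
    return min(state + 1, 3) if taken else max(state - 1, 0)

def predict(state):
    return state >= 2          # predict "taken" in either taken state

state = 0                      # arbitrary starting state; the first guess may well be wrong
mispredicts = 0
for run in range(3):           # the loop body is itself called repeatedly
    for taken in [True] * 1000 + [False]:   # taken 1000 times, then the loop exit
        if predict(state) != taken:
            mispredicts += 1
        state = update(state, taken)
print(mispredicts)             # 5: two while warming up on the first run, then one per loop exit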

As Hennessy and Patterson write in Computer Architecture, A Quantitative Approach (v4, p89), "the importance of branch prediction has increased". With deeper pipelines and the mismatch between CPU speed and memory, guessing right matters.

There isn't enough space in the Branch History Table to store 2 bits of history for every single branch in a system, so instead there'll be some smaller table and some function to map the full address to an offset in that table. According to [Pan92], 4096 to 8192 entries is not that far off "an infinite" table. All that's left is the transform from program counter to BHT entry, which for 32-bit-aligned opcodes can be something as simple as (PC >> 4) & 8191.

But the table is not infinite, there will be clashes: if something else is using the same entry in the BHT, then your branch may be predicted according to its history.
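
A quick sketch of such a clash, using the toy index function from above: any two branch addresses that differ by a multiple of 8192 × 16 bytes share a BHT slot, so the history of one steers the prediction of the other (the addresses here are made up for illustration).

def bht_index(pc):
    return (pc >> 4) & 8191        # the toy PC -> BHT-entry mapping from the text

victim_branch   = 0x00401230                    # hypothetical branch in the target code
attacker_branch = victim_branch + (8192 << 4)   # crafted to alias the same table entry

print(hex(bht_index(victim_branch)), hex(bht_index(attacker_branch)))   # same index for both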

The new attack then simply works out the taken/not-taken state of the target branch by seeing how your own code, whose addresses are designed to conflict, is predicted. That's all. And given that ability to observe branch direction, the attacker uses it to reach conclusions about the state of the system.

Along with caching, branch prediction is the key way in which modern CPUs speed things up. And it does. But it's the clash between your entries in the cache and BHT and those of the target routine which leaks information: how long it takes to read things, whether a branch is predicted or not. The very act of speeding up code is what leaks secrets.

"Modern" CPU Microarchitecture is in trouble here. We've put decades of work into caching, speculation, branch prediction, and now they all turn out to expose information. We built for speed, at what turns out to be the cost of secrecy. And in cloud environments where you cannot stop malicious code running on the same CPU, that means your secrets are not safe.

What can we do?

Maybe another microcode patch is possible: when switching from user mode to OS mode, the BHT is flushed. But that will cripple performance in any loop which invokes system code in it. Or you somehow isolate BHT entries for different virtual memory spaces. Probably the best long term, but I'll leave it to others to work out how to implement.

What's worrying is the fact that new exploits are appearing so soon after Meltdown and Spectre. Security experts are now looking at all of the speculative execution bits of modern CPUs and thinking "that's interesting..."; more exploits are inevitable. And again, systems, especially cloud infrastructures, will be left struggling to catch up.

Cloud infrastructures are probably going to have to pin every VM to a dedicated CPU, with the hypervisor on its own part. That will limit secret exfiltration to the VM OS and anything else running on the core (the paper looks at the intel SGX "secure" zone and showed how it can be targeted). It'll be the smaller VMs at risk here, and potentially containerized stuff: you'd want all containers on a single core to be "yours".

What about single-core systems running a mix of trusted and untrusted code (your phone, your web browser)? That's going to be hard. You can't dedicate one x86 core per browser tab.

Longer term: we're going to have to go through every bit of modern CPU architecture from a security perspective and ask "is this safe?" And no doubt conclude that any speedup mechanism which relies on the history of previous work is insecure, if that history includes the actions taken (or speculatively taken) by sensitive applications.

Which is bad news for the majority of today's high end CPUs, especially those ones trying to keep the x86 instruction set alive. Those are the parts which have had so much effort invested into getting fractional improvements in caching, branch prediction, speculation and pipeline efficiency, and so have gotten incredibly complex. That's where the big vulnerabilities live.

This may push us back towards "underperformant but highly parallel" massively multicore systems. Little/no speculation, isolating user space code into their own processes.

The most recent example of this is/was the Sun Niagara CPU line, which started off with a pool of early-90s-era SPARC cores without fancy branch prediction... instead they had four sets of state covering the entire execution state of four different threads, scheduling work between them. Memory access? Stall that thread, schedule another. Branch? Don't predict, just wait and see, and add another thread's opcodes to the pipeline.

There are still going to be security issues there (the cache is shared across the many cores, so the actions of one thread can be implicitly observed by others in their execution times). And it seemingly does speculate memory loads if there was no other work to schedule.

What's equally interesting is that the system is so power efficient. Speculative execution and branch prediction require lots of gates, renamed registers, branch history tables and the like — every missed prediction or branch is energy wasted. Compare that to an Itanium part, where you almost need to phone up your electricity supplier for permission to power one up.

The Niagara 2 part pushed it ahead further to a level that is impressive to read. At the same time, you can see a great engineering team struggling with a fab process behind what Intel could do, Sun trying to fight the x86 line, and, well, losing.

Where are the parts now? Oracle's M8 CPU PDF touts its out-of-order execution — speculative execution — and data/instruction prefetch. I fear it's now got the same weaknesses as everything else. Apparently the Java 8 streams API gets a bonus speedup, which reminds me to post something criticising Java checked exceptions for making that API unusable for the throws-IOException Hadoop codebase. As for the virtualization support, again, you'd need to think about pinning to a CPU. There's also that L1–L3 cache hit/miss problem: something speculating in one CPU could evict cached data observable to others, unless speculative memory fetches weren't a feature of the part.

They look like nice-but-pricey servers; if you are paying the Oracle RDBMS tax, the all-in-one price might mitigate that. Overall though, a focus on many fast-but-dim cores, rather than the "throw silicon at maximum single-thread performance" architecture of recent x86 designs, may give future designs opportunities to be more resistant to attacks related to speculative execution. I also think I'd like to see their performance numbers running Apache Spark 2.3 with one executor per thread and lots of RAM.

[Photo: my 2008 Fizik Gobi saddle snapped one of its Titanium rails last week. Made it home in the rain, but a sign that after a decade, parts just wear out.]

Carlos Sanchez: Kubernetes Plugin for Jenkins 1.5


15 releases have gone by in 7 months since 1.0 last September

Some interesting new features since 1.0 and a lot of bugfixes and overall stability improvements. For instance now you can use yaml to define the Pod that will be used for your job:

def label = "mypod-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
) {
    node (label) {
      container('busybox') {
        sh "hostname"
      }
    }
}

 

You can use the readFile step to load the yaml from a file in your git repo.

  • Allow creating Pod templates from yaml. This allows setting all possible fields in Kubernetes API using yaml JENKINS-50282 #275
  • Support passing kubeconfig file as credentials using secretFile credentials JENKINS-49817 #294

You can find the full changelog in GitHub.

Edward J. Yoon: Kyung Hee University Lecture Materials


Since the audience is students, I wrote it as a "this is how my life went" talk, with content that should be helpful and stirring enough to fire them up a little. I've been doing software for more than 10 years, but even I'm not sure whether it's worth doing.