Channel: Planet Apache

Mukul Gandhi: Xerces bug XERCESJ-1687

I wish to share the anguish that the following Xerces bug has caused me:

https://issues.apache.org/jira/browse/XERCESJ-1687

The bug reporter is quite right in his arguments, but I have to say that the Xerces team cannot fix this bug right now. I've also been thinking of resolving it with the resolution "Later" (but my conscience doesn't allow that either).

I hope the situation will improve.

Bryan Pendleton: Up, up, and away


With the opening of the Salesforce Tower looming in the next few weeks, there's a flurry of media attention.

Here are two very interesting articles, with lots of links to chase:

  • Transbay Transformed
    As the blocks around the transit center fill up with towers, San Francisco is getting a crash course in what high-density urban living is all about.
  • San Francisco’s Skyline, Now Inescapably Transformed by Tech
    While few were looking, tech ate San Francisco, a development encouraged by Mayor Ed Lee, who unexpectedly died this month. There are now 79,129 high-tech jobs in the city, about triple the number a decade ago, according to a new research report from the real estate firm CBRE.

    If you work in an office in the city, there is a 28 percent chance you work in tech. That level is exceeded only by Seattle, where the sharp growth of Amazon pushed the percentage of tech workers up to 38 percent, and by Silicon Valley itself, where it is 42 percent.

    “San Francisco has gone from being driven by multitudes of industries in 2007 to being now focused largely on tech,” said Colin Yasukochi, a CBRE analyst. “The growth feeds on itself. Tech workers are attracted to the great opportunities in the city, and the supply of workers means more tech companies come here.”

And no, I'm not moving into the new building.

And yes, it really does look like all the new office floors will use the dreadfully mistaken, awfully horrible open seating arrangement.

Sigh.

Justin Mason: Links for 2018-01-01

  • Steven Bellovin on Bitcoin

    When you engineer a system for deployment you build it to meet certain real-world goals. You may find that there are tradeoffs, and that you can’t achieve all of your goals, but that’s normal; as I’ve remarked, “engineering is the art of picking the right trade-off in an overconstrained environment”. For any computer-based financial system, one crucial parameter is the transaction rate. For a system like Bitcoin, another goal had to be avoiding concentrations of power. And of course, there’s transaction privacy. There are less obvious factors, too. These days, “mining” for Bitcoins requires a lot of computations, which translates directly into electrical power consumption. One estimate is that the Bitcoin network uses up more electricity than many countries. There’s also the question of governance: who makes decisions about how the network should operate? It’s not a question that naturally occurs to most scientists and engineers, but production systems need some path for change. In all of these, Bitcoin has failed. The failures weren’t inevitable; there are solutions to these problems in the academic literature. But Bitcoin was deployed by enthusiasts who in essence let experimental code escape from a lab to the world, without thinking about the engineering issues—and now they’re stuck with it. Perhaps another, better cryptocurrency can displace it, but it’s always much harder to displace something that exists than to fill a vacuum.

    (tags: steven-bellovin bitcoin tech software systems engineering deployment cryptocurrency cypherpunks)

Bertrand Delacretaz: Great software is like a great music teacher


This blog post of mine was initially published by Computerworld UK in 2010.

I’m amazed at how many so-called “enterprise software systems” do not embrace the Web model in 2010, making them much harder and much less fun to use than they should be.

I have recently started making parallels between this and music teachers, and the analogy seems to work. Don’t ask where the parallel comes from…weird connections in my brain I guess.

Say you want to learn to play the guitar. Someone recommended Joe, who’s teaching in his downtown studio.

You get there almost on time. Traffic. You find Joe’s studio and here he is, dressed in a neat and simple casual outfit. Smiling at you.

Joe: Hey welcome! So you wanna learn to play?

You: Yes. I brought my guitar, got it from my uncle. It’s a bit worn out as you can see.

Joe: I see…well, you might want to get a better one if you continue past the first few lessons, but for now that will do! Do you have something that you would like to play to get started?

You: “Smoke on the water”, of course. The opening line.

Joe: Let’s try that then, I’ll show you! Just plug your guitar in this amplifier, and let me setup some nice effects so you get a cool sound.

Joe plays the first few bars a few times, shows you how that works and you give it a try. Ten minutes later you start sounding half-decent and you’re having loads of fun playing together with Joe.

Joe: Okay, you’re doing good! I’ll show you my rough course plan so you know what’s up next. I’m quite flexible when it comes to the curriculum – as long as you’re having fun and progressing we’ll be fine.

It’s easy to imagine the bad teacher version of this story:

  • Unwelcoming
  • Complains because you’re three minutes late.
  • Wears a boring old-fashioned suit, and not willing to let you play that crappy old guitar.
  • Boring you with tons of scales before you can start playing a song.
  • Not giving you an overview of what comes next.
  • Not ready to compromise on His Mighty Standard Teaching Program.
  • Making you feel stupid about how bad a player you are.

Bad software is like that bad teacher:

  • Hard to get started with.
  • Requires tons of specific client software of just the right version.
  • Requires you to enter loads of useless information before doing anything useful or fun.
  • Not willing to let you explore and make your own mistakes, and making sure you feel stupid when mistakes occur.

The Web model is the way to go, of course.

  • Ubiquitous access.
  • Welcoming to various types of client software.
  • Easy to point to by way of permanent URLs.
  • Doing its best (fail whales anyone?) to keep you informed and avoid making you feel stupid when something goes wrong.
  • Letting you explore its universe with simple web-based navigation, and rewarding your efforts with new discoveries.

This is 2010, and this is the Web. Don’t let any useless software stand between you and the information and services that you need.


Bertrand Delacretaz: Would you hire an open source developer?


This blog post of mine was initially published by Computerworld UK in 2010.

As open source comes of age and becomes mainstream, more and more job postings include “open source skills” in their requirements.

But do you really want to hire someone who spends their time exchanging flames with members of their own community in public forums? Someone who greets newcomers with “I have forwarded your question to /dev/null, thanks” and other RTFM answers?

Luckily, open source communities are not just about being rude and unwelcoming to strangers. Most of them are not like that at all, and the skills you learn in an open source community can make a big difference in a corporate environment as well.

One very important skill that you learn or improve in an open source community is to express yourself clearly in written form. The mailing lists or forums that we use are very limited compared to in-person communications, and extra care is required to get your message through. Being concise and complete, disagreeing respectfully, avoiding personal attacks and coping with what you perceive as personal attacks are all extremely useful skills on the job. Useful skills for your whole life actually.

Once you master asynchronous written discussions as a way to build group consensus, doing the same in a face to face meeting can be much easier. But the basic skills are the same, so what you learn in an open source community definitely helps.

Travel improves the mind, and although being active in open source can help one travel more, even without traveling you’ll be exposed to people from different cultures, different opinions, people who communicate in their second or third language, and that helps “improve your mind” by making you more tolerant and understanding of people who think differently.

Not to mention people who perceive what you say in a different way than you expected – this happens all the time in our communities, due in part to the weak communications channels that we have to use. So you learn to be extra careful with jokes and sneaky comments, which might work when combined with the right body language, but can cause big misunderstandings on our mailing lists. Like when you travel to places with a different culture.

Resilience to criticism and self-confidence is also something that you’ll often develop in an open source community. Even if not rude, criticism in public can hurt your ego at first. After a while you just get used to it, take care of fixing your actual mistakes if any, and start ignoring unwarranted negative comments. You learn to avoid feeding the troll, as we say. Once your work starts to produce useful results that are visible to the whole community, you don’t really care if someone thinks you’re not doing a good job.

The technical benefits of working in open source communities are also extremely valuable. Being exposed to the work and way of thinking of many extremely bright developers, and quite a few geniuses, definitely helps you raise the bar on what you consider good software. I remember how my listening skills improved when I attended a full-time music school for one year in my youth: just listening to great teachers and fellow students play made me unconsciously raise the bar on what I consider good music.

Open source communities, by exposing you to good and clever software, can have the same effect. And being exposed to people who are much better than you at certain things (which is bound to happen for anybody in an open source project) also helps make you more humble and realistic about your strengths and weaknesses. Like in soccer, the team is most efficient when all players are very clear about their own and other players’ strengths and weaknesses.

You’ll know to whom you should pass or not pass the ball in a given situation.

To summarise, actively participating in a balanced open source community will make you a better communicator, a more resilient and self-confident person, improve your technical skills and make you humbler and more realistic about your strengths and weaknesses.


Jim Jagielski: The Path to Apache OpenOffice 4.2.0


It is no secret that, for a while at least, Apache OpenOffice had lost its groove.

Partly it was due to external issues. Mostly it was that the project and the committers were spending a lot of their time and energy battling and correcting the FUD associated with the project. Nary a week would go by without the common refrain "OpenOffice is Dead. Kill it already!" and constant (clueless) rehashes of the history between OpenOffice and LibreOffice. With all that, it is easy and understandable to see why morale within the AOO community would have been low, which would then reflect on and affect development on the project itself.

So more so than anything, what the project needed was a good ol' shot of adrenaline in the arm and some encouragement to keep the flame alive. Over the last few months this has succeeded beyond our dreams. After an admittedly way-too-long period, we finally released AOO 4.1.4. And we are actively working on not only a 4.1.5 release but also preparing plans for our 4.2.0 release.

And it's there that you can help.

Part of what AOO really wants to be is a simple, easy-to-use, streamlined office suite for the largest population of people possible. This includes supporting old and long-deprecated OSs. For example, our goal is to continue to support Apple OSX 10.7 (Lion) with our 4.2.0 release. However, there is one platform which we are simply unsure about what to do with, and how to handle. And what makes it even more interesting is that it's our reference build system for AOO 4.1.x: CentOS5.

Starting with AOO 4.2.0, we are defaulting to GIO instead of Gnome VFS. The problem is that CentOS5 doesn't support GIO, which means that if we continue with CentOS5 as our reference build platform for our community builds, then all Linux users who use and depend on those community builds will be "stuck" with Gnome VFS instead of GIO. If instead we start using CentOS6 as our community build server, we leave CentOS5 users in the lurch (NOTE: CentOS5 users would still be able to build AOO 4.2.0 on their own, it's just that the binaries that the AOO project supplies won't work). So we are looking at 3 options:

  1. We stick w/ CentOS5 as our ref build system for 4.2.0 but force Gnome VFS.
  2. We move to CentOS6 and accept the default of GIO, understanding that this makes CentOS5 a non-supported OS for our community builds.
  3. Just as we offer Linux 32 and 64bit builds, starting w/ 4.2.0 we offer CentOS5 community builds (w/ Gnome VFS) IN ADDITION TO CentOS6 builds (w/ GIO). (i.e.: 32bit-Gnome VFS, 64bit-Gnome VFS, 32bit-GIO, 64bit-GIO).

Which one makes the most sense? Join the conversation and the discussion on the AOO dev mailing list!

Steve Loughran: Speculation


Speculative execution has been intel's strategy for keeping the x86 architecture alive since the P6/Pentium Pro part shipped in '95.

I remember coding explicitly for the P6 in a project in 1997; HPLabs was working with HP's IC Division to build their first CMOS-camera IC, which was an interesting problem. Suddenly your IC design needs to worry about light, aligning the optical colour filter with the sensors, making sure it all worked.

Eyeris

I ended up writing the code to capture the raw data at full frame rate, streaming to HDD, with an option to alternatively render it with/without the colour filtering (algorithms from another part of the HPL team). Which means I get to nod knowingly when people complain about "raw" data. Yes, it's different for every device precisely because it's raw.

The data rates of the VGA-resolution sensor via the PCI boards used to pull this off meant that both cores of a multiprocessor P6 box were needed. It was the first time I'd ever had a dual socket system, but both sockets were full with the 150MHz parts, and with careful work we could get away with the "full link rate" data capture which was a core part of the qualification process. It's not enough to self-test the chips any more, you see; you need to look at the pictures.

Without too many crises, everything came together, which is why I have a framed but slightly skewed IC part to hand. And it's why I have memories of writing multithreaded Windows C++ code with some of the core logic in x86 assembler. I also have memories of ripping out that ASM code as it turned out that it was broken, doing it as C pointer code and having it be just as fast. That's because C code compiled to x86 by a good compiler, executed on a great CPU, is at least as performant as hand-written x86 code by someone who isn't any good at assembler, and can be made correct more easily by the selfsame developer.

150 MHz may be a number people laugh at today, but the CPU:RAM clock ratios weren't as bad as they are today: cache misses were less expensive in terms of pipeline stalls, and those parts were fast. Why? Speculative and out-of-order execution, amongst other things:
  1. The P6 could provisionally guess which way a branch was going to go, speculatively executing that path until it became clear whether or not the guess was correct -and then commit/abort that speculative code path.
  2. It used a branch predictor to make that guess about the direction a branch would take, based on the history of previous attempts, and a default option (FWIW, this is why I tend to place the most likely outcome first in my if() statements; tradition and superstition).
  3. It could execute operations out of order. That is, its predecessor, the P5, was the last time mainstream intel desktop/server parts executed x86 code in the order the compiler generated it, or the human wrote it.
  4. Register renaming meant that even though the parts had a limited set of registers, those OOO operations could reuse the same EAX, EBX, ECX registers without problems.
  5. It had caching to deal with the speed mismatch between that 150 MHz CPU & RAM.
  6. It supported dual CPU desktops, and I believe quad-CPU servers too. They'd be called "dual core" and "quad core" these days and looked down at.

Being the first multicore system I'd ever used, it was a learning experience. First was learning how too much Windows NT4 code was still not stable in such a world. NTFS crashes with all volumes corrupted? Check. GDI rendering triggering kernel crash? Check. And on a 4-core system I got hold of, everything crashed more often. Lesson: if you want a thread-safe OS, give your kernel developers as many cores as you can.

OOO forced me to learn about the x86 memory model itself: barrier opcodes, when things could get reordered and when they wouldn't. Summary: don't try and be clever about synchronization, as your assumptions are invalid.

Speculation is always an unsatisfactory solution though. Every mis-speculation is lost cycles. And on a phone or laptop, that's wasted energy as much as time. And failed reads could fill up the cache with things you didn't want. I've tried to remember if I ever tried to use speculation to preload stuff if present, but doubt it. The CMOV command was a non-branching conditional assignment which was better, even if you had to hand-code it. The PIII/SSE added the PREFETCH opcode so you could do a non-faulting hinted prefetch which you could stick into your non-branching code, but that was a niche opcode for people writing games/media codecs &c. And as Linus points out, what was clever for one CPU model turns out to be a stupid idea a generation later. (Arguably, that applies to Itanium/IA-64, though as it didn't speculate, it doesn't suffer from the Spectre & Meltdown attacks.)

Speculation, then: a wonderful use of transistors to compensate for how we developers write so many if() statements in our code. Wonderful, because it kept the x86 line alive and so helped Intel deliver shareholder value and keep the RISC CPU out of the desktop, workstation and server businesses. Terrible, because "transistors" is another word for "CPU die area", with its yield equations and opportunity cost, and also for "wasted energy on failed speculations". If we wrote code which had fewer branches in it, and that got compiled down to CMOV opcodes, life would be better. But we have so many layers of indirection these days; so many indirect references to resolve before those memory accesses. Things are probably getting worse now, not better.
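
As a small illustration of the branch-free style that paragraph alludes to, here is the classic sign-mask idiom for a minimum in Java (a sketch of mine, not from the original post; whether the JIT turns either version into a CMOV or a branch is entirely up to the compiler and the CPU generation):

// Two ways to compute min(a, b); the second has no branch in the source.
public final class BranchFreeMin {

    static int minWithBranch(int a, int b) {
        return (a < b) ? a : b;            // source-level branch
    }

    static int minBranchFree(int a, int b) {
        int diff = a - b;                  // caveat: assumes a - b does not overflow
        return b + (diff & (diff >> 31));  // mask is all-ones when a < b, else zero
    }

    public static void main(String[] args) {
        System.out.println(minWithBranch(3, 7));   // 3
        System.out.println(minBranchFree(3, 7));   // 3
        System.out.println(minBranchFree(7, 3));   // 3
    }
}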

This week's speculation-side-channel attacks are fascinating, then. These are very much architectural issues about speculation and branch prediction in general, rather than implementation details. Any CPU manufacturer whose parts do speculative execution has to be worried here, even if there is, as yet, no evidence that their shipping parts are vulnerable to the current set of attacks. The whole point about speculation is to speed up operation based on the state of data held in registers or memory, so the time-to-execute is always going to be a side-channel providing information about the data used to make a branch.


The fact that you can get at kernel memory, even from code running under a hypervisor, means, well, a lot. It means that VMs running in cloud infrastructure could get at the data of the host OS and/or those of other VMs running on the same host (those S3 SSE-C keys you passed up to your VM? 0wned, along with your current set of IAM role credentials). It potentially means that someone else's code could be playing games with branch prediction to determine what codepaths your code is taking. Which, in public cloud infrastructure is pretty serious, as the only way to stop people running their code alongside yours is currently to pay for the top of the line VMs and hope they get a dedicated part. I'm not even sure that dedicated cores in a multicore CPU are sufficient isolation, not for anything related to cache-side-channel attacks (they should be good for branch prediction, I think, if the attacker can't manipulate the branch predictor of the other cores).

I can imagine the emails between cloud providers and CPU vendors being fairly strained, with the OEM/ODM teams on the CC: list. Even if the patches being rolled out mitigate things, if the slowdown on switching to kernelspace is as expensive as hinted, then that slows down applications, which means that the cost of running the same job in-cloud just got more expensive. Big cloud customers will be talking to their infrastructure suppliers on this, and then negotiating discounts for the extra CPU hours, which is a discount the cloud providers will expect to recover when they next buy servers. I feel as sorry for the cloud CPU account teams as I do for the x86 architecture group.

Meanwhile, there's an interesting set of interview questions you could ask developers on this topic.
  1. What does the generated java assembly for the Ival++ on a java long look like?
  2. What if the long is marked as volatile?
  3. What does the generated x86 assembler for a Java Optional<AtomicLong> opt.map(AtomicLong::addAndGet(1)) look like?
  4. What guarantees do you get about reordering?
  5. How would you write code which attempted to defend against speculation timing attacks?

I don't have the confidence to answer 1-4 myself, but I could at least go into detail about what I believed to be the case for 1-3; for #4 I should do some revision.
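
For concreteness, here is roughly the code those first three questions are asking about, in a sketch of mine rather than anything from the original post (question 3 is written as a lambda, since AtomicLong::addAndGet(1) is not legal method-reference syntax; what the JIT actually emits for any of it has to be checked with a disassembler):

import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the code fragments behind interview questions 1-3 above.
public class SpeculationQuestions {

    static long plain;               // question 1: ival++ on a plain long
    static volatile long fenced;     // question 2: the same, but volatile

    static void increment() {
        plain++;     // read-modify-write; no atomicity or ordering guarantees
        fenced++;    // volatile gives ordering, but still not an atomic increment
    }

    // question 3, written as a lambda
    static Optional<Long> bump(Optional<AtomicLong> opt) {
        return opt.map(a -> a.addAndGet(1));
    }

    public static void main(String[] args) {
        increment();
        System.out.println(bump(Optional.of(new AtomicLong(41))));
        // To inspect the generated x86-64, run with a hsdis disassembler
        // installed and flags such as:
        //   -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly
    }
}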

As for #5, defending: I would love to see what others suggest. Conditional CMOV ops could help against branch-prediction attacks, by eliminating the branches. However, searching for references to CMOV and the JDK turns up some issues which imply that branch prediction can sometimes be faster, including JDK-8039104, "Don't use Math.min/max intrinsic on x86". It may be that even CMOV gets speculated on, with the CPU prefetching what is moved and keeping the write uncommitted until the state of the condition is known.

I suspect that the next edition of Hennessy and Patterson, "Computer Architecture, a Quantitative Approach", will be covering this topic. I shall look forward to it with even greater anticipation than I have had for all the previous, beloved, editions.

As for all those people out there panicking about this, worrying if their nearly-new laptop is utterly exposed? You are running with Flash enabled on a laptop you use in cafe wifis without a VPN and with the same password, "k1tten",  you use for gmail and paypal. You have other issues.

Justin Mason: Links for 2018-01-03


Jim Jagielski: My 2017-2018 Introspections


As the old year falls away and the new year boots up, it is traditional for people to write "old year retrospectives" as well as "new year predictions." Heck, I originally envisioned this entry as a duet of 2 separate blogs. But as much as I tried, it was just too difficult to keep them distinct and self-contained. There was simply too much overlap and as much as I expect "new things" in 2018, I think 2018 will mostly be a solidification of events and issues ramped up from 2017.

So with all that in mind, I present my 2017-2018 Introspections... in no particular order:

Continue reading "My 2017-2018 Introspections"

Bryan Pendleton: RowHammer strikes again


Before we get to the main event (just be patient), I want you to first spend a little time with something that I think is actually a much MORE interesting story about computer security: The strange story of “Extended Random”

Yesterday, David Benjamin posted a pretty esoteric note on the IETF’s TLS mailing list. At a superficial level, the post describes some seizure-inducingly boring flaws in older Canon printers. To most people that was a complete snooze. To me and some of my colleagues, however, it was like that scene in X-Files where Mulder and Scully finally learn that aliens are real.

Why is this such a great story?

  1. Well, for one thing, it's been going on for more than a decade. That's a long time.
  2. For another thing, the technology involved is quite complex: multiple software systems have to interact, in quite complex ways
  3. And for another thing, at least one part of the overall vulnerability involves simply including additional COMPLETELY RANDOM DATA in your messages over the network. How is adding some extra random data a vulnerability? (You'll have to read the article for yourself)
  4. But most importantly, as opposed to most computer security vulnerabilities, this isn't simply an implementation mistake made by some systems programmer; from everything we can determine, it is actually the result of deliberate sabotage by our own government, sabotage so subtle that, fifteen years later, the best cryptographic minds in the world are still picking through the details.

Anyway, enough of that.

I know what you came here for.

You want to hear what good old RowHammer has been up to over the last couple years, right?!

Well, unless you've been living in a cave (and who reads blogs if they live in a cave?), you know that what we're talking about here is Reading privileged memory with a side-channel, also known as: "the latest amazing work by the astonishing Google Project Zero team."

Well, anyway, here are the goods:

  • Reading privileged memory with a side-channel
    We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.
  • Meltdown and Spectre
    These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs.
  • Meltdown
    Meltdown allows an adversary who can run code on the vulnerable processor to obtain a dump of the entire kernel address space, including any mapped physical memory. The root cause of the simplicity and strength of Meltdown are side effects caused by out-of-order execution.
  • Spectre Attacks: Exploiting Speculative Execution
    in order to mount a Spectre attack, an attacker starts by locating a sequence of instructions within the process address space which when executed acts as a covert channel transmitter which leaks the victim’s memory or register contents. The attacker then tricks the CPU into speculatively and erroneously executing this instruction sequence, thereby leaking the victim’s information over the covert channel. Finally, the attacker retrieves the victim’s information over the covert channel. While the changes to the nominal CPU state resulting from this erroneous speculative execution are eventually reverted, changes to other microarchitectural parts of the CPU (such as cache contents) can survive nominal state reversion.
  • Mitigations landing for new class of timing attack
    Since this new class of attacks involves measuring precise time intervals, as a partial, short-term, mitigation we are disabling or reducing the precision of several time sources in Firefox. This includes both explicit sources, like performance.now(), and implicit sources that allow building high-resolution timers, viz., SharedArrayBuffer.
  • KASLR is Dead: Long Live KASLR
    In this paper, we present KAISER, a highly-efficient practical system for kernel address isolation, implemented on top of a regular Ubuntu Linux. KAISER uses a shadow address space paging structure to separate kernel space and user space. The lower half of the shadow address space is synchronized between both paging structures.
  • The mysterious case of the Linux Page Table Isolation patches
    Of particular interest with this patch set is that it touches a core, wholly fundamental pillar of the kernel (and its interface to userspace), and that it is obviously being rushed through with the greatest priority. When reading about memory management changes in Linux, usually the first reference to a change happens long before the change is ever merged, and usually after numerous rounds of review, rejection and flame war spanning many seasons and moon phases.

    The KAISER (now KPTI) series was merged in some time less than 3 months.

  • Quiet in the peanut gallery
    I wish there were some moral to finish with, but really the holidays are over, the mystery continues, and all that remains is a bad taste from all the flack I have received for daring intrude upon the sacred WordPress-powered tapestry of a global security embargo.
  • Re: Avoid speculative indirect calls in kernel
    I think somebody inside of Intel needs to really take a long hard look at their CPU's, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed.

    .. and that really means that all these mitigation patches should be written with "not all CPU's are crap" in mind.

    Or is Intel basically saying "we are committed to selling you shit forever and ever, and never fixing anything"?

  • Today's CPU vulnerability: what you need to know
    The Project Zero researcher, Jann Horn, demonstrated that malicious actors could take advantage of speculative execution to read system memory that should have been inaccessible.

It's pretty interesting stuff.

It will take a while to dig through and think about.

But, it's important to note: this is primarily an attack against large, shared servers, which typically run software on behalf of many unrelated parties on the same physical system, using techniques such as "virtualization", or "containers".

Think "cloud computing."

Those environments are the ones which are spending the most amount of time thinking about what these new findings mean.

Rohit Yadav: DEPLOYING PROJECT USING CAPISTRANO (CAPISTRANO IN RAILS)


We can deploy a project to a Digital Ocean, AWS, Transip or Linode server (among others) using Capistrano.

What is Capistrano:-

Capistrano is a remote server automation tool.

It supports the scripting and execution of arbitrary tasks, and includes a set of sane-default deployment workflows.

For that we have to follow simple steps :-

How to Install Capistrano

We can use Capistrano with Rails 4 and Rails 5, as well as Rails 2 and 3.

If we use Rails 4 or 5, then we can follow these steps :-

1.  Gemfile:-

       group :development do
             gem "capistrano"
             gem 'net-ssh'
             gem 'capistrano-bundler'
             gem 'capistrano-rails'
             gem 'capistrano-rvm'
             gem 'capistrano-sidekiq'
             gem 'whenever', require: false  # needed for require 'whenever/capistrano' in the Capfile below
       end


 

2.   Run the bundle

      cd /path/to/your/project

       bundle

3.   Prepare your Project for Capistrano 

       bundle exec cap install

      This will create:

  •        Capfile in the root directory of your Rails app
  •        deploy.rb file in the config directory
  •        deploy directory in the config directory

4.   Replace the contents of your Capfile with the following:

      require "capistrano/setup"
      require "capistrano/deploy"
      require 'capistrano/bundler'
      require 'capistrano/rails'
      require 'capistrano/rvm'
      require 'capistrano/rails/migrations'
      require 'capistrano/sidekiq'
      require 'whenever/capistrano'
      require "capistrano/scm/git"
      set :rvm_type, :user
      set :rvm_ruby_version, '2.1.10'

      install_plugin Capistrano::SCM::Git
      Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }




Note :-
     This Capfile loads some pre-defined tasks into your Capistrano configuration files to make your deployments hassle-free, such as automatically:
  •     Selecting the correct Ruby
  •     Pre-compiling Assets
  •     Cloning your Git repository to the correct location
  •     Installing new dependencies when your Gemfile has changed

5. Replace the contents of config/deploy.rb with the following, updating the highlighted fields (application name, repository URL, server IP, password and deploy path) with your own app and server parameters:

   # config valid only for current version of Capistrano
    lock "3.7.2"

     set :application, "rohitproject"
     # set :scm, :git
     set :repo_url, "git@bitbucket.org:rohit/rohit.git"
     server '136.142.***.***', user: 'root', roles: %w{web app}, my_property: :my_value,     password: '********'
     set :deploy_to, '/var/deploy/rohitproject'


     # Default branch is :master
     # ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp

     # Default value for :pty is false
     set :pty, true

     # Default value for linked_dirs is []
     append :linked_dirs, "log", "tmp/pids", "tmp/cache", "tmp/sockets", "public/system"

     # Default value for keep_releases is 5
     set :keep_releases, 5
     namespace :deploy do
           desc "start resque"
           task "resque:start" => :app do
                 run "cd #{current_path} && RAILS_ENV=#{environment} BACKGROUND=yes PIDFILE=#{shared_path}/pids/resque.pid QUEUE=* nohup bundle exec rake environment resque:work QUEUE='*' >> #{shared_path}/log/resque.out"
            end

            desc "stop resque"
            task "resque:stop" => :app do
                 run "kill -9 `cat #{shared_path}/pids/resque.pid`"
            end

           desc "ReStart resque"
           task "resque:restart" => :app do
                 Rake::Task['deploy:resque:stop'].invoke
                 Rake::Task['deploy:resque:start'].invoke
           end

           desc "start resque scheduler"
           task "resque:start_scheduler" => :app do
                 run "cd #{current_path} && RAILS_ENV=#{environment} DYNAMIC_SCHEDULE=true BACKGROUND=yes PIDFILE=#{shared_path}/pids/resque_scheduler.pid QUEUE=* nohup bundle exec     rake environment resque:scheduler >> #{shared_path}/log/resque_scheduler.out"
           end

            desc "stop resque scheduler"
            task "resque:stop_scheduler" => :app do
                  run "kill -9 `cat #{shared_path}/pids/resque_scheduler.pid`"
            end

            desc "ReStart resque scheduler"
            task "resque:restart" => :app do
                 Rake::Task['deploy:resque:stop_scheduler'].invoke
                 Rake::Task['deploy:resque:start_scheduler'].invoke
            end


           desc 'Restart application'
            task :restart do
                  on roles(:app), in: :sequence, wait: 5 do
                  execute :touch, release_path.join('tmp/restart.txt')
            end
            end

           after :publishing, :restart

           after :restart, :clear_cache do
                 on roles(:web), in: :groups, limit: 3, wait: 10 do
                        # restart nginx once the new release is live
                        execute :sudo, :service, :nginx, :restart
                 end
           end

          desc "Update crontab with whenever"
          task :update_cron do
                 on roles(:app) do
                        within current_path do
                             execute :bundle, :exec, "whenever --update-crontab #{fetch(:application)}"
                        end
                 end
          end
          after :finishing, 'deploy:update_cron'
    end

6. Replace the contents of config/deploy/production.rb

  •      Uses production as the default environment for your Rails app
       role :app, "136.142.***.***"
       role :web, "136.142.***.***"
       role :db,  "136.142.***.***"

       server '136.142.***.***', user: 'root', roles: %w{web app}, my_property: :my_value


       

7.   Then you can run :- 

      cap production deploy

Note :- 
  • It will automatically run Sidekiq, Whenever, delayed jobs, asset precompilation, bundle install, migrations and everything else.
  • If we use the same settings then we don't need to run any commands on the server ourselves, such as starting Sidekiq or running rake assets:precompile.

We can follow the links below for more details :-

http://guides.beanstalkapp.com/deployments/deploy-with-capistrano.html
http://robmclarty.com/blog/how-to-deploy-a-rails-4-app-with-git-and-capistrano
http://www.lugolabs.com/articles/14-schedule-rails-tasks-with-whenever-and-capistrano

If we use Rails 2 or 3, then we will follow these steps :-

Gemfile :- 

group :development do
  gem "capistrano"
  gem 'net-ssh'
end
gem 'rails_12factor', group: :production

Run these commands :-

gem install capistrano-ext
capify .

Capfile:-

load 'deploy'
load 'config/deploy' # remove this line to skip loading any of the default tasks

Deploy.rb :-

set :application, "pdm"
set :repository,  "git@bitbucket.org:rohit/rohityadav.git"
set :user, "root"
set :domain, "136.***.***.***"
set :use_sudo, false
set :scm, :git
set :branch, "develop"

role :web, "136.***.***.***"                         # Your HTTP server, Apache/etc
role :app, "136.***.***.***"                          # This may be the same as your `Web` server
role :db,  "136.***.***.***", :primary => true 
role :db,  "136.***.***.***"

server "136.***.***.***", :app, :web, :db, :primary => true
set :deploy_to, '/var/deploy/project'

require 'capistrano/ext/multistage'

set :stages, ["staging", "production"]
set :default_stage, "production"

Then run:-

cap production deploy

Justin Mason: Links for 2018-01-04


Justin Mason: Links for 2018-01-05

Bryan Pendleton: The Silk Roads: a very short review


Peter Frankopan's The Silk Roads: A New History of the World is an extremely ambitious book.

It sets out to survey, in a single 500 page volume, some 2000+ years of history of the region which, roughly speaking, spans from Turkey and Egypt to Mongolia and Pakistan in the one direction, and from Yemen to Russia in the other.

That's a lot of land, and a lot of time, to cover.

Certainly if you, like me, struggle to distinguish Basra from Bactria, Samarkand from Sanjan, Karakorum from Kashgar, Mosul from Mashad, Dushanbe from Dunhuang, or Istanbul from Isfahan (ok, well, that last one I knew), then you'll find a lot to learn in this history of human activity in Central Asia over the last few thousand years.

And it's certainly a colorful book, full of great stories of traders, adventurers, explorers, merchants, prophets, and their interactions.

(Attila the Hun! Genghis Khan! Richard Lionheart! The Black Death! Vasco da Gama! T.E. Lawrence! Timur! Marco Polo!)

It's an immense scope, though, and Frankopan can barely get going on one episode before he races on to the next, breathless and impatient, rather like the White Rabbit: always in a hurry, but not quite sure where he's going.

I didn't mind any of the minutes I spent with The Silk Roads, but in the end I'm afraid that this part of the world is still rather a blur to me, which is a shame, because I think that's precisely the problem that Frankopan set out to solve.

Would he have been more successful (with me, at least), had he confined himself to a smaller region, or a shorter time period, the better to have used those pages to spend more time inhabiting particular incidents and characters? I'm not sure. I'm not much of a reader of histories, so I suspect this problem is just endemic to the genre, and it really just means that while his book was fascinating, I'm not really the target audience.

Mukul Gandhi: XML validation. Some thoughts

I think there are various people using XML who like having XML data without any validation. I'm a strong proponent of having validation nearly always when using XML. Comparing the situation with RDBMS data would make this clear, I think (I don't mind proving things about a technology by taking cues from another technology which is hugely popular). Do we ever use data in RDBMS tables without the schema? (We don't.) The same should apply to XML, since validation is very closely defined alongside XML (DTD at least, and then XSD). If DTD or XSD validation is provided along with XML parsing by the XML toolkit of choice, then why shouldn't we use validation whenever we're using XML? As a consequence, we'd be working with a better design.

Interestingly, validation doesn't always happen when using XML, because it hasn't been made mandatory in the XML language (unlike schemas with an RDBMS). People using XML sometimes like having XML data quickly transported between components or stored locally, and they don't use validation in the process; which is fine, since it meets the needs of the application.

Sometimes, people using XML are influenced by how JSON is used. Presently, JSON doesn't have a schema language (though I have come to know that this may change in the future), and JSON is very popular & useful for certain use cases. Therefore, people try to use XML the same way -- i.e. without validation.
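
As an illustration of how little code it takes to turn validation on when an XSD is available, here is a minimal sketch using the standard JAXP validation API (the file names are placeholders of mine):

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

// Validate an XML document against an XSD with javax.xml.validation.
public class ValidateXml {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("purchaseOrder.xsd"));
        Validator validator = schema.newValidator();
        try {
            validator.validate(new StreamSource(new File("purchaseOrder.xml")));
            System.out.println("document is valid");
        } catch (SAXException e) {
            System.out.println("document is invalid: " + e.getMessage());
        }
    }
}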

Steve Loughran: Trying to Meltdown in Java -failing. Probably

Meltdown has made for an "interesting" week in computing, as everyone is learning about/revising their knowledge of Speculative Execution. FWIW, I'd recommend the latest version of Patterson and Hennessy, Computer Architecture: A Quantitative Approach. Not just for its details on speculative execution, but because it is the best book on microprocessor architecture and design that anyone has ever written, and it is lovely to read. I could read it repeatedly and not get bored. (And I see I need to get the 6th edition!)

Stokes Croft drugs find

This weekend, rather than read Patterson and Hennessy (*), I had a go at seeing if you could implement the meltdown attack in Java, hence in mapreduce, spark, or other non-native JARs.

My initial attempt failed, provided the part only speculates one branch deep.

More specifically "the range checking Java does on all array accesses blocks the standard exploit given steve's assumptions". You can speculatively execute the out of bounds query, but you can't read the second array at an offset which will trigger $L1 cache loading.

If there's a way to do a reference to two separate memory locations which doesn't trigger branching range checks, then you stand a chance of pulling it off. I tried that using the ? : operator pair, something like

String ref = data ? refA : refB;

which I hoped might compile down to something like


mov ref, refB
cmp data, 0
cmovnz ref, refA

This would do the move of the reference in the ongoing speculative branch, so that if "ref" was referenced in any way, it would trigger the resolution.

In my experiment (2009 MacBook Pro with OSX Yosemite + latest Java 8 early access release), a branch was generated ... but there are some refs in the OpenJDK JIRA to using CMOV, including the fact that the hotspot compiler may generate it if it thinks the probability of the move taking place is high enough.

Accordingly, I can't say "the hotspot compiler doesn't generate exploitable codepaths", only "in this experiment, the hotspot compiler didn't appear to generate an exploitable codepath".
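
For reference, a minimal sketch of the kind of micro-experiment described above; the class and method names are mine rather than from the actual code, and it assumes a hsdis disassembler library is installed so the JIT's output can be printed:

// Hypothetical re-creation of the conditional-reference experiment.
// Run with a hsdis library on the JVM's path, e.g.:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly CondMoveProbe
public class CondMoveProbe {

    private static final String REF_A = "refA";
    private static final String REF_B = "refB";

    // The method under test: a two-way select with no explicit if().
    // Whether hotspot emits a branch or a CMOV here is up to the JIT.
    static String select(boolean data) {
        return data ? REF_A : REF_B;
    }

    public static void main(String[] args) {
        long hits = 0;
        // Loop enough times that the JIT compiles select() and its
        // assembly shows up in the -XX:+PrintAssembly output.
        for (int i = 0; i < 1_000_000; i++) {
            if (select((i & 1) == 0) == REF_A) {
                hits++;
            }
        }
        System.out.println("hits = " + hits);
    }
}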

Now that the code is done, I might try it on a Linux VM with Java 9 to see what is emitted.
  1. If you can get the exploit in, then you'd have access to other bits of the memory space of the same JVM, irrespective of what the OS does. That means one thread with a set of Kerberos tickets could perhaps grab the secrets of another. It'd be pretty hard, given the way the JVM manages objects on the heap: I wouldn't know where to begin, but it would become hypothetically possible.
  2. If you can get native code which you don't trust loaded into the JVM, then it can do whatever it wants. The original meltdown exploit is there. But native code running in JVM is going to have unrestricted access to the entire address space of the JVM -you don't need to use meltdown to grab secrets from the heap. All meltdown would do here is offer the possibility of grabbing kernel space data —which is what the OS patch does.

Anyway, I believe my first attempts failed within the context of this experiment.

Code-wise, this kept me busy on Sunday afternoon. I managed to twist my ankle quite badly on a broken paving stone on the way to the patisserie on Saturday, so I sat around for an hour drinking coffee in Stokes Croft, then limped home, with all forms of exercise crossed off the TODO list for the w/e. Time for a bit of Java coding instead, as a break from what I'd been doing over the holiday (C coding a version of Ping which outputs CSV data and a LaTeX paper on the S3A committers).

It took as much time trying to get hold of the OS/X disassembler for generated code as it did coding the exploit. Why so? Oracle have replaced all links in Java.sun.net which would point to the reference dynamic library with a 302 to the base Java page telling you how lucky you are that Java is embedded in cars. Or you see a ref to on-stack-replacement on a page in Project Kenai, under a URL which starts with https://kenai.com/, point your browser there and end up on http://www.oracle.com/splash/kenai.com/decommissioning/index.html and the message "We're sorry the kenai.com site has closed."

All the history and knowledge on JVM internals and how to work there is gone. You can find the blog posts from four years ago on the topic, but the links to the tools are dead.

This is truly awful. It's the best argument I've seen for publishing this info as PDF files with DOI references, where you can move the artifact around, but citeseer will always find it. If the information doesn't last five years, then

The irony is, it means that because Oracle have killed all those inbound links to Java tools, they're telling the kind of developer who wants to know these things to go away. That's strategically short-sighted. I can understand why you'd want to keep the cost of site maintenance down, but really, breaking every single link? It's a major loss to the Java platform —especially as I couldn't even find a replacement.

I did manage to find a copy of the openjdk tarball people said you could D/L and run make on, but it was on a freebsd site, and even after a ./Configure && make, it broke trying to create a bsd dynlib. Then I checked out the full openjdk source tree, branch -8, installed the various tools and tried to build there. Again, some error. I ended up finding a copy of the needed hsdis-amd64.dylib library on GitHub, but I had to then spend some time looking at evolvedmicrobe's work &c to see if I could trust this to "probably" not be malware itself. I've replicated the JAR in the speculate module, BTW.

Anyway, once the disassembler was done and the other aspects of hotspot JIT compilation clear (if you can't see the method you wrote, run the loop a few thousand more times), I got to see some well annotated x86-64 assembler. Leaving me with a new problem: x86-64 assembler. It's a lot cleaner than classic 32 bit x86: having more registers does that, especially as it gives lots of scope for improving how function parameters and return values are managed.

What next? This is only a spare-time bit of work, and now that I'm back from my EU-length xmas break, I'm doing other things. Maybe next weekend I'll do some more. At least now I know that exploiting meltdown from the JVM is not going to be straightforward.

Also I found it quite interesting playing with this, to see when the JVM kicks out native code and what it looks like. We code so far from the native hardware these days, it's too "low level". But the new speculation-side-channel attacks have shown that you'd better understand modern CPU architectures, including how your high-level code gets compiled down.

I think I should submit a berlin buzzwords talk on this topic.

(*) It is traditional to swap the names of the author on every use. If you are a purist you have to remember the last order you used.

Rohit Yadav: SIGN UP ON FIRST PROJECT IT WILL AUTOMATICALLY LOGIN TO ANOTHER PROJECT SERVER


Sign up on the first project and it will automatically sign the user up on another project's server.

In the first server's code :-

Gemfile
        gem 'rest-client'


Controller

      def create
               # sign up on this server
               @user = User.create(name: params[:user][:name], email: params[:user][:email], password: params[:user][:password], password_confirmation: params[:user][:password_confirmation])

               # then replicate the same sign-up on the second server
               RestClient::Request.execute(method: :post, url: "https://ipaddress/domain/users", headers: { params: { user: { name: params[:user][:name], email: params[:user][:email], password: params[:user][:password], password_confirmation: params[:user][:password_confirmation] } } }, verify_ssl: false)
      end

On the second server, the user will be signed up automatically.


Rohit Yadav: GENERATE PDF FILE IN RAILS


Code Generating PDFs From HTML With Rails

Using Wicked_pdf with Rails 5

Step 1

Install the required gems

gem 'wicked_pdf', '~> 1.1'  
gem 'wkhtmltopdf-binary'   

then generate the initializer

rails g wicked_pdf              

Step 2(optional)

You need to register the PDF MIME type. This is done in an initializer, and is required for older Rails versions only.

#config/initializers/mime_types.rb

Mime::Type.register "application/pdf", :pdf

Step 3

Set up the controller to render the PDF format. In this setup you have options to configure many items depending on your requirements; at the moment, only a few items need to be set up for me.

#controllers/yours_controller.rb

def show
   .
   .
   .
   respond_to do |format|
    format.html
    format.pdf do
      render pdf: "Your_filename",
             template: "yours_controller/show.html.erb",
             layout: 'pdf'

    end
   end
end

Step 4

Create and setup the view part of the application

First we create the new layout for the PDF to use, and use the helpers from wicked_pdf to reference the stylesheet, JavaScript library or images required. For this example, I will only use the stylesheet helper.

#app/views/layouts/pdf.html.erb

<!DOCTYPE html>
<html>
<head>
<title>PDF</title>
  <%= wicked_pdf_stylesheet_link_tag "application" -%>
</head>
<body>

  <div class='container'>
    <%= yield %>
  </div>

</body>
</html>


In my case the CSS or SCSS file name is “application”.

Next, we can create the link to generate the pdf file

<%= link_to 'Create PDF document', youritem_path(@youritem, :format => :pdf) %>

Rohit Yadav: RAILS: UPLOADING PHOTOS VIA AMAZON S3 AND PAPERCLIP (UPLOADING FILES TO S3 IN RUBY WITH PAPERCLIP)


Set up Ruby on Rails with Paperclip and S3 using AWS SDK
Uploading Files to S3 in Ruby with Paperclip

Paperclip requires the following gems added to your Gemfile.

If your Paperclip version is 5.1.0, then use 'aws-sdk' version ~> 2.3:

# Gemfile
gem 'paperclip'
gem 'aws-sdk', '~> 2.3'

Or, if our Paperclip version is 4.1.0, then we need to use 'aws-sdk' version < 2.0 (note: use a version less than 2.0, otherwise you will get a Paperclip error):

gem 'paperclip'
gem 'aws-sdk', '< 2.0'


Run bundle install and restart the Rails server after modifying the Gemfile.

Then run the command :-

rails generate paperclip user image

Define the file attribute in the Model

class User < ActiveRecord::Base
    has_attached_file :image, styles: { medium: "300x300>", thumb: "100x100>" }, default_url: "/images/:style/missing.png"
      validates_attachment_content_type :image, content_type: /\Aimage\/.*\z/
end

Migrations: Now our migration file looks like :-

class AddAvatarColumnsToUsers < ActiveRecord::Migration
  def up
    add_attachment :users, :image
  end

  def down
    remove_attachment :users, :image
  end
end


View Page :-

<%= form_for @user, url: users_path, html: { multipart: true } do |form| %>
  <%= form.file_field :image %>
<% end %>


Now in the controller :-

def create
  @user = User.create( user_params )
end

private
def user_params
  params.require(:user).permit(:image)
end


In our view page we can show the image using :-

<%= image_tag @user.image.url %>
<%= image_tag @user.image.url(:medium) %>
<%= image_tag @user.image.url(:thumb) %>


After that :- 
S3 bucket Implementation:-
1. Go to Aws Console
2. Create the s3 bucket with bucket name and choose specific region.
3. Get the app_key and app_secret (link: https://www.multcloud.com/tutorials/s3-key.html)


We’ll also need to specify the AWS configuration variables for the development/production Environment.

# config/environments/production.rb
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('AWS_REGION'),
  }
}

Or, if you are creating a new file in the config folder for saving the S3 credentials, then we just have to change our model settings and read the s3_credentials from that config file :-

Model :-

class User < ActiveRecord::Base
     has_attached_file :image,:styles => { :icon => "50x50>", :small => "150x150", :medium => "300x300>", :thumb => "100x100>" }, :default_url => "/assets/icons/picture.png", :storage => :s3,:s3_credentials => "#{Rails.root}/config/aws_s3.yml",:url => ':s3_domain_url', :path=> ":attachment/:id/:style/:filename"

      validates_attachment_content_type :image, :content_type => /\Aimage\/.*\Z/
end


# config/aws_s3.yml
# Paperclip reads this file through ERB, so environment variables can be interpolated.

development:
  bucket: <%= ENV['S3_BUCKET_NAME'] %>
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  s3_region: <%= ENV['AWS_REGION'] %>



We can refer to :-

https://github.com/thoughtbot/paperclip

https://devcenter.heroku.com/articles/paperclip-s3#define-the-file-attribute-in-the-model
https://coderwall.com/p/vv1iwg/set-up-ruby-on-rails-with-paperclip-5-and-s3-using-aws-sdk-v2
http://www.korenlc.com/rails-uploading-photos-via-amazon-s3-and-paperclip/

Justin Mason: Links for 2018-01-09
