A Brand New Mindcraft Moment


Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link]



1. this WP article was the fifth in a series of articles following the security of the internet from its beginnings to relevant topics of today. discussing the security of linux (or lack thereof) fits well in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim for your own recent pieces on the subject. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. but silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case. 2. "We do a reasonable job of finding and fixing bugs." let's start here. is this statement based on wishful thinking or cold hard data you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's longer than the lifetime of many devices people buy, use and ditch in that period. 3. "Issues, whether they are security-related or not, are patched quickly," some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bugreports would be treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples aren't statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets decided to be security related, which as we all know is a messy business in the linux world) 4. "and the stable-update mechanism makes those patches available to kernel users." except when it doesn't. 
and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree. 5. "Specifically, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream." you don't have to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we have not pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs, no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it quite hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives: after some initial exploratory discussions i explicitly asked them about supporting this long drawn-out upstreaming work and got no answers.



Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link]



Money (aha) quote: > I propose you spend none of your free time on this. Zero. I propose you get paid to do this. And well. Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security experts like you to upstream your patches.



Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link]



I'd just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at their presentation. The tone of PaXTeam's comment shows the frustration built up over the years with the way things work, which I believe should be taken at face value, empathized with, and understood rather than simply dismissed. 1. http://rationalwiki.org/wiki/Tone_argument 2. http://geekfeminism.wikia.com/wiki/Tone_argument Cheers,



Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link]



why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?



Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]



Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link]



No Argument



Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link]



Please don't; it doesn't belong there either, and it especially does not need a cheering section of the sort the tech press (LWN usually excepted) tends to provide.



Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link]



OK, but I was thinking of Linus Torvalds



Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]



Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (subscriber, #24648) [Link]



Why do you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving a company (ahem, PAXTeam) money is the only solution. (Not meant to impugn PAXTeam's security efforts.)



The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (whether real or perceived), but merely throwing money at the problem won't fix it.



And yes, I do realize the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect financial transactions, but it's much more involved than just that.



Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link]



Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link]



I think you actually agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?



Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]



they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux. > And if Jon had only talked to you, his would have been too. given that i am the author of PaX (part of grsec), yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i am quite sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands). > [...]it also contained quite a few groan-worthy statements. nothing is perfect, but considering the audience of the WP, it is one of the better journalistic pieces on the subject, regardless of how you and others dislike the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;). speaking of your complaints about journalistic qualities: since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features, with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article? > Aren't you glad? no, or not yet anyway. i've heard plenty of empty words over the years and nothing ever manifested, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the associated circus (which Linus rightfully despises FWIW).



Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link]



Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link]



Right now we've got developers from big names saying that doing all that the Linux ecosystem does *safely* is an itch they have. Sadly, the surrounding cultural attitude of developers is to hit functional goals, and often performance goals. Security goals are often ignored. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that's a job that will take a sustained effort, not merely the upstreaming of patches. Whatever the culture, these patches will go upstream eventually anyway because the ideas they embody are now timely. I can see one way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this sort of problem, here's how everything will keep working because $proof, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of declaring problem + solution + functional test evidence + performance test evidence + security test evidence. K3n.



Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



And about that fork barrel: I would argue it's the other way around. Google forked and lost already.



Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link]



Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link]



So I have to confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?



Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (subscriber, #85566) [Link]



I personally think you and Nick Krause are opposite sides of the same coin. Programming ability and basic civility.



Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link]



Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]



I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. It's a time like this, where something you seem to be an "expert" at is in demand, that you show cooperation and willingness to participate, because it's an opportunity. I'm rather shocked that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in the average career, a handful at the most. Sometimes you have to invest in proving your skills, and this is one of those moments. It seems the kernel community may finally take this security lesson to heart and embrace it, as stated in the article as a "mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end the developers that exploit the opportunity will prosper from it. I feel old even having to write that.



Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link]



Perhaps there's a chicken-and-egg problem here, but when seeking out and funding people to get code upstream, it helps to pick people and teams with a history of being able to get code upstream. It's entirely reasonable to prefer working out of tree, providing the ability to develop impressive and important security advances unconstrained by upstream requirements. That's work someone might also want to fund, if it meets their needs.



Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link]



Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link]



You make this argument (implying you do research and Josh does not) and then fail to support it with any cite. It would be much more convincing if you gave up on the Onus probandi rhetorical fallacy and actually cited facts. > case in point, it was *them* who suggested that they wouldn't fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence. For those following along at home, this is the relevant set of threads: http://lists.coreinfrastructure.org/pipermail/cii-discuss... A quick precis is that they told you your project was bad because the code was never going upstream. You told them it was because of kernel developers' attitude, so they should fund you anyway. They told you to submit a grant proposal, you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal was probably the best thing to do. At that point you went silent, not vice versa as you imply above. > obviously i will not spend time to write up a begging proposal just to be told that 'no sorry, we do not fund multi-year projects at all'. that is something that one should be told in advance (or heck, be part of some public guidelines so that others will know the rules too). You seem to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you will spend it, they're unlikely to disburse. Saying "I'm smart and I know the problem, now hand over the cash" doesn't even work for most academics with a solid reputation in the field; which is why most of them spend >30% of their time writing grant proposals. > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l 1 Stellar, I have to say. 
And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time-consuming skill and one of the reasons teams like Linaro exist and are well funded. If more of your stuff does go upstream, it will be due to the not inconsiderable efforts of other people in this area. You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly normal first-stage business model, but it does rather depend on the patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there. Now here's some free advice in my field, which is assisting companies to align their businesses in open source: the selling-out-of-tree-patches route is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in spite of you, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it will be a tough sell when you have no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. Actually, "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you: do the upstreaming yourself before it gets done for you. That way you have a credible historical claim to plan B, and you might even have a plan A selling a rollup of upstream-tracking patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed because your work wasn't going anywhere. 
Your alternative is to continue playing the role of Cassandra and perhaps suffer her eventual fate.
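[Editor's note: the dispute above turns on how one counts contributions: commit authorship versus credit given in commit-message trailers. The following is a minimal, illustrative sketch of the difference, run against a throwaway repository with made-up names and emails; it is not anyone's real kernel history.]

```python
import subprocess, tempfile

def git(*args, repo):
    """Run a git command in the throwaway repo and return its stdout."""
    return subprocess.run(("git",) + args, cwd=repo, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", repo=repo)
# One commit authored by an upstream developer, crediting someone else
# via a "Reported-by:" trailer (all identities are hypothetical).
git("-c", "user.name=Upstream Dev", "-c", "user.email=dev@example.com",
    "commit", "-q", "--allow-empty", "-m", "fix overflow",
    "-m", "Reported-by: PaX Team <pax@example.com>", repo=repo)

# Counting by authorship (what a `git log | grep Author:` one-liner
# measures) misses the credit entirely:
authored = int(git("rev-list", "--count", "--author=PaX", "HEAD", repo=repo))
# Searching commit messages for trailers finds it:
credited = int(git("rev-list", "--count", "--grep=Reported-by: PaX",
                   "HEAD", repo=repo))
print(authored, credited)  # 0 1
```

Both numbers are defensible; they just measure different things, which is why the two sides talk past each other here.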



Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link]



> Second, for the potentially viable pieces this would be a multi-year > full time job. Is the CII willing to fund projects at that level? If not > we all would end up with lots of unfinished and partially broken features. please show me the answer to that question. without a definitive 'yes' there's no point in submitting a proposal, because that is the time frame that in my opinion the job will take, and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information. > Stellar, I must say. "Lies, damned lies, and statistics". you realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in thanks to us? as for me specifically, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example, because it is also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes). > You now have a business model selling non-upstream security patches to customers. now? we have had paid sponsorship for our various stable kernel series for 7 years. i wouldn't call it a business model though, as it hasn't paid anyone's bills. > [...]calling into question the earnestness of your attempt to put them there. i must be missing something here, but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory, to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story. 
as to your free advice, let me reciprocate: complex problems don't solve themselves. code fixing complex problems doesn't write itself. people writing code that solves complex problems are few and far between, as one can find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger. PS: since you are so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation, and try to understand the reason. or simply look at all the CVEs that affected, say, vanilla's ASLR but did not affect mine. PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project for when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).



Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link]



In other words, you tried to define their process for them ... I can't imagine why that wouldn't work. > "Lies, damned lies, and statistics". The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one-line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more? > i've never in my life tried to submit PaX upstream (for all the reasons discussed already). So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? Great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.



Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link]



what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words, you admit that my question was not actually answered. > The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles: code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there are clearly other, more capable people who have done so and decided that my/our work was worth something, else no one would have been feeding off of it for the past 15 years and still counting. and as unimaginable as it may seem to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there, especially when it means putting up with the kind of silly hostility on lkml that you have now also demonstrated here (it's ironic how you came to the defense of josh, who specifically asked people not to bring that notorious lkml style here. good job there James.). as for world domination, there are many ways to achieve it, and something tells me that you are clearly out of your league here since PaX has already achieved it. you are running code that implements PaX features as we speak.



Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link]



I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/): > as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)? I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?



Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link]



Please provide one that's not wrong, or less wrong. It will take less time than you've already wasted here.



Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link]



anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name, then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much while explicitly not trying, imagine if i did :). it's an extremely complex task, so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).



Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]



*shrug* Or don't; you're only sullying your own reputation.



Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]



I would not either



Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]



Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]



Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]



Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link]



Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to http://lwn.net/Articles/663612/. PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's complete unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh, I wonder why his fixes aren't going upstream very fast.)



Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]



> and that one commit you found that went in despite said ban someone's ban doesn't mean it will translate into someone else's enforcement of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).



Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]



I don't see this message in my mailbox, so presumably it got swallowed.



Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



You're aware that it's entirely possible that everyone is wrong here, right? That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right, it doesn't mean you are?



Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (guest, #5770) [Link]



I think you've got him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to much of the community, the article might in fact contain a lot of truth.



Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link]



Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]



"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true" Just as you criticized the article for mentioning Ashley Madison even though the very first sentence of the following paragraph notes it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers of the prevalence of Linux in the world, if you're criticizing the mention, then shouldn't likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security? As the PaX Team pointed out in the initial post, the motivations aren't hard to understand -- you made no mention at all of it being the fifth in a long-running series following a fairly predictable time trajectory. No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too. -Brad



Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]



Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]



It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-) K3n.



Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link]



Unfortunately, I understand neither the "security" people (PaXTeam/spender) nor the mainstream kernel people when it comes to their attitude. I confess I have absolutely no technical capabilities on any of these subjects, but if they had all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while, everyone involved could have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little. Instead, both sides seem bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback. Perplexing stuff...



Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link]



Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]



Take a scientific computational cluster with an "air gap", for example. You'd probably want most of the security stuff turned off on it to achieve maximum performance, because you can trust all the users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill most of the exploit classes there, if those devices can still run reasonably well with most security features turned on. So, it's not either/or. It's probably "it depends". But if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be harder to make it part of everyday decisions for distributors and users.



Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link]



How sad. This Dijkstra quote comes to mind immediately: "Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter 'How to program if you cannot.'"



Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link]



I guess that fact was too unpleasant to fit into Dijkstra's world view.



Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link]



Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum, and really proofs are the only way forwards. I'm no security expert, my field is all distributed systems. I understand and have implemented Paxos, and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I couldn't intuitively reason about these things at all. So I started defining the properties I wanted and gradually proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both completely obvious that this can happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
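[Editor's note: for readers unfamiliar with the vector clocks mentioned above, here is a minimal sketch of the causality test involved. It is purely illustrative - the names and structure are not the poster's actual algorithms.]

```python
# A vector clock maps a process id to that process's event counter.
def merge(a, b):
    """Combine two clocks by taking the element-wise maximum."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def happened_before(a, b):
    """a causally precedes b iff every component of a is <= the
    corresponding component of b, and the clocks are not equal."""
    return all(a.get(k, 0) <= b.get(k, 0) for k in a.keys() | b.keys()) and a != b

a = {"p1": 2, "p2": 0}
b = {"p1": 2, "p2": 1}
c = {"p1": 1, "p2": 3}
print(happened_before(a, b))  # True: b dominates a
print(happened_before(a, c))  # False: neither dominates, so a and c are concurrent
print(merge(b, c))            # {'p1': 2, 'p2': 3}
```

The pain the poster describes comes from the concurrent case: with many processes there are exponentially many interleavings in which neither clock dominates, which is exactly why exhaustive testing gives out and proofs become necessary.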



Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link]



> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards.

Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".

But it is easy - by training I am a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I am thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions happen. Then I think about how molecules stick together to make materials, and think about metals, and/or Van der Waals, and stuff.

Point is, you have to *layer* stuff, and look at things, and say "how can I split things off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per Record (row). And, same as relational, one attribute per Field (column). Can you map your relational tables to reality so easily? :-)

Going back THIRTY years, I remember a story about a guy who built little computer crabs, that could quite happily scuttle around in the surf zone. Because he didn't try to work out how to solve all the problems at once - each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to just process a little bit of the problem, and there was no central "brain". But it worked ... Maybe you should just write a bunch of small modules to solve each individual problem, and let the final answer "just happen".

Cheers, Wol



Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link]



To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we'd construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the result, and transitivity of the operation when chained with itself, or with the preceding aggregate schema composed of schemas A through O (for which these things have already been argued). The end result is a set of operations that, executed in arbitrary order, yield a set of properties holding for the result and outputs. Thus the formal design is proven correct (with caveat lectors concerning scope, correspondence with its implementation [though that can be proven as well], and read-only ["xi"] operations).
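A minimal sketch of the kind of proof obligation described, in Z-style LaTeX. The Counter/Increment schemas and the bound N are invented for illustration; they are not from the comment above.

```latex
% Base state: a counter that must never exceed a bound N (the invariant).
\begin{schema}{Counter}
  count : \nat
\where
  count \leq N
\end{schema}

% A "delta" operation: may change Counter, must re-establish the invariant.
\begin{schema}{Increment}
  \Delta Counter
\where
  count < N \\
  count' = count + 1
\end{schema}

% Obligation argued once per operation, then composed with the others:
%   count \leq N  \land  count < N  \implies  count' = count + 1 \leq N
% Read-only operations instead assert \Xi Counter, i.e. count' = count.
```

Each operation schema carries its own preservation argument, so chaining them in arbitrary order keeps the invariant - the composition property the comment refers to.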



Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]



Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often than not. They dive into the detail and completely miss the big picture. (Medicine, an interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.) Cheers, Wol



Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]



https://www.youtube.com/watch?v=VpuVDfSXs-g (LCA 2015 - "Programming Considered Harmful") FWIW, I think that this talk is very relevant to why writing secure software is so hard. -Dave.



Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link]



While we're spending millions on a mess of security issues, kernel issues are not on our top-priority list. In fact, I remember only once having discussed a kernel vulnerability. The result of the analysis was that all our systems were running kernels that were older than the kernel that had the vulnerability.

But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company depends on the IT systems working. So "not breaking user space" is a security feature for us, because a breakage of one part of our several tens of thousands of Linux systems will stop the roll-out of the security update.

Another problem is embedded software or firmware. Today almost all hardware systems include an operating system, often some Linux version, providing a full network stack embedded to support remote management. Regularly these systems do not survive our mandatory security scan, because vendors still haven't updated the embedded openssl.

The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity for ten years or even longer without any customer maintenance. The current state of software engineering would require support for an automated update process, but vendors must understand that their business model has to be able to finance the resources providing the updates.

Overall I'm optimistic: networked software is not the first technology used by mankind to cause problems that were addressed later. Steam engine use could lead to boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.



Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]



The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: the people who learn how to hack into these systems via kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _seems_ as though those hacks are via much simpler vectors. I.e. lesser-skilled hackers find there's a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.

So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is far more effective at keeping systems "secure" than the former), then the hackers that do get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't bother your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?



Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link]



However, some effective mitigation at the kernel level would be very helpful for crushing cybercriminals' and skiddies' attempts. If one of your customers running a futures trading platform exposes some open API to their clients, and the server has memory corruption bugs that can be exploited remotely, then you know there are known attack techniques (such as offset2lib) that can help the attacker make the weaponized exploit much easier. Will you explain the failosophy "a bug is a bug" to your customer and tell them it will be okay? Btw, offset2lib is useless against PaX/grsecurity's ASLR implementation. For most commercial uses, more security mitigation in the software won't cost you extra budget: you still have to do the regression testing for each upgrade anyway.



Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]



Keep in mind that I focus on external web-based penetration tests and that in-house tests (local LAN) will likely yield different results.



Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link]



I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .Net thing by open-sourcing Minecraft. Oh well. I mean, security is good too, I guess.



Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]



Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]



Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link]



Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link]



Posted Nov 9, 2015 15:53 UTC (Mon) by neiljerram (subscriber, #12005) [Link]



(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)



Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link]



I'd just like to add that, in my view, there's a general problem with the economics of computer security, which is very visible currently. Two problems even, maybe.

First, the money spent on computer security is usually diverted towards the so-called security "circus": quick, easy solutions that are mostly chosen just in order to "do something" and get better press. It took me a long time - maybe decades - to claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this perspective and would rather take the risk knowingly (provided that I can save money/resources for myself) than take a bad approach at fixing it (and have no money/resources left when I realize I should have done something else). And I find there are plenty of bad or incomplete approaches currently available in the computer security field. Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve. And we definitely need to enlighten the press on that, because it is not so easy to appreciate the effectiveness of security mechanisms (which, by definition, should prevent things from happening).

Second, and this may be more recent and more worrying: the flow of money/resources is oriented towards attack tools and vulnerability discovery much more than towards new security mechanisms. This is especially worrying as cyber "defense" initiatives look increasingly like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.

Still, all the resources go to those adult teenagers playing the white-hat hackers with not-so-difficult programming tricks or network monitoring or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies that have yet to prove their usefulness entirely (especially for peace protection...). Personally, I would happily leave them all the hype; but I'll forcefully claim that they have no right whatsoever to any of the budget allocation decisions. Only those working on defense should. And yes, it means we should decide where to put the resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX team would be among the first to benefit from such a change.)

While thinking about it, I would not even leave white-hat or cyber-guys any hype in the end. That's more publicity than they deserve. I crave for the day I'll read in the newspaper: "Another of these ill-advised debutant programmer hooligans that pretend to be cyber-pirates/warriors modified some well-known virus program code exploiting a programmer mistake and managed nevertheless to bring one of those unfinished and bad-quality programs, X, that we are all obliged to use, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the defense experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to bring more security engineer positions into the academic domain or civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to be unprofessional in this affair."



Hmmm - cyber-hooligans - I like the label. Though it doesn't apply well to the battlefield-oriented variant.



Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link]



The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it's usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimum amount of effort and changes.

Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody who has valuable expertise and some company that has spent millions on slick marketing and 'native advertising' on large websites and computer magazines. The people with the money unfortunately only have their own judgment to rely on when buying into 'cyber security'.

> Those spilling our scarce money/resources on ready-made ineffective tools should get the bad press they deserve.

There is no such thing as 'our scarce money/resources'. You have your money, I have mine. Money being spent by some corporation like Red Hat is their money. Money being spent by governments is the government's money. (You, actually, have far more control over how Walmart spends its money than over what your government does with theirs.)

> This is especially worrying as cyber "defense" initiatives look increasingly like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, bad, ineffective weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, as even basic school-level encryption scares them down to uselessness.

Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone projects or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls - especially when those secure mechanisms interfere with data collection efforts.

Unfortunately you/I/we can't depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen. Companies like Red Hat have been massively helpful in spending resources to make the Linux kernel more capable.. but they are driven by the need to show a profit, which means they need to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with administration and software development than on security at the low-level OS. Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats.. assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security vs convenience, I am sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.

On top of that, most Enterprise software is extremely bad. So much so that 10 hours spent on improving a web front-end will yield more real-world security benefits than 1000 hours spent on Linux kernel bugs for most businesses. Even for 'normal' Linux users, a security bug in their Firefox NPAPI flash plugin is far more devastating and poses a massively higher threat than an obscure Linux kernel buffer overflow. It's just not that important for attackers to get 'root' to get access to the important data... usually all of which is contained in a single user account.

Ultimately it's up to people like you and me to put the effort and money into improving Linux security. For both ourselves and other people.



Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]



Spilling has always been the case, but now, to me and in computer security, most of the money seems spilled out of bad faith. And this is often your money or mine: either tax-fueled governmental resources or corporate costs that are directly reimputed onto the prices of the goods/software we're told we are *obliged* to buy. (Look at the marketing discourse for corporate firewalls, home alarms or antivirus software.) I think it's time to point out that there are several "malicious malefactors" around, and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I believe he may be among those hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage to counteract them, or oblige them to reveal themselves, than many of us. I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).

In the end, I think you are right to say that currently it's only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal - especially while some very serious people get very serious salaries to distribute, more or less randomly, some hard-to-evaluate budgets. [1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied by malicious people, everyone should have factual, transparent and honest behavior as the first priority in their mind.



Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link]



It even has a nice, seven-line BASIC pseudo-code that describes the current situation and clearly shows that we are stuck in an endless loop. It does not answer the big question, though: how to write better software. The sad thing is that this is from 2005, and all the things that were clearly stupid ideas 10 years ago have proliferated even more.



Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]



Note: IMHO, we should investigate further why these dumb things proliferate and get so much support. If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that they can do great things given the right message. If we are facing active people exploiting public credulity: let's identify and fight them. But, more importantly, let's capitalize on this knowledge and secure *our* systems, to showcase at a minimum (and more later on, of course). Your reference's conclusion suits me very well: "challenge [...] the conventional wisdom and the status quo" - that job I would happily accept.



Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]



That rant is itself a bunch of "empty calories". The converse of the items it rants about, which it's suggesting at some level, would be as bad or worse, and indicative of the worst kind of security thinking that has put a lot of people off. Alternatively, it's just a rant that adds little of value.

Personally, I think there is no magic bullet. Security is, and always has been throughout human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks and costs. If there are mistakes being made, it's that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the grsecurity kernel hardening stuff so hard to apply to common distros (e.g. there is no reliable source of a grsecurity kernel for Fedora or RHEL, is there?). Why does the whole Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (e.g. basic bounds-checking layers in between I/O and parsing layers, say)? Can hardware do more to provide security with speed? No doubt there are lots of people working on "block classes of attacks" stuff; the question is, why aren't there more resources directed there?



Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link]



> There are a number of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

This seems like a reason which is really worth exploring. Why is it so? I think it isn't obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It's been this way for many years. If filesystems qualify for common interest, surely security does. So there doesn't seem to be any obvious reason why this issue doesn't get more mainstream attention, except that it really already gets enough. You could say that disaster has not struck yet, that the iceberg has not been hit. But it looks like the Linux development process is not overly reactive elsewhere.



Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link]



That is an interesting question; clearly that's what they actually believe, no matter what they publicly say about their commitment to security technologies. What is the actually demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there isn't sufficient consequence for the lack of security to drive more investment, so we are left begging and cajoling unconvincingly.



Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]



The key problem with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to an absence of deliberate strategy persists, we will oscillate between phases of relaxed unconcern and anxious paranoia. Admittedly, kernel developers seem quite resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. The days when innocent lives will unconsciously depend on the security of (Linux-based) computer systems are not so distant; underwater, that's already the case if I remember my last dive correctly, as well as in several current vehicles according to some reports.



Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]



Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.

This is really not that surprising: for hosting needs, the kernel has been "done" for quite a while now. Apart from support for current hardware, there isn't much use for newer kernels. Linux 3.2, or even older, works just fine. Hosting doesn't need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), advanced instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power management (if the system doesn't have constant high load, it isn't making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.

For their security needs, hosting companies already use grsecurity. I have no numbers, but some experience suggests that grsecurity is basically a fixed requirement for shared hosting.

On the other hand, kernel security is almost irrelevant on the nodes of a supercomputer or on a system running large business databases that are wrapped in layers of middleware. And mobile vendors simply don't care.



Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link]



Linking



Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]



Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link]



The assembled likely recall that in August 2011, kernel.org was root-compromised. I'm sure the system's hard drives were sent off for forensic examination, and we've all been waiting patiently for the answer to the most important question: what was the compromise vector? From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then. This has been disappointing.

When the Debian Project discovered unexpected compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public post-mortems of the 2010 Web site breaches. Ars Technica's Dan Goodin was still trying to follow up on the lack of a post-mortem on the kernel.org meltdown -- in 2013. Two years ago. He wrote:

Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.

Who's responsible, then? Is anyone? Anybody? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown: nothing yet. How about some answers? Rick Moen [email protected]



Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]



Less seriously, note that if even the Linux mafia doesn't know, it must be the Venusians; they are notoriously stealthy in their invasions.



Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link]



I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.



Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link]



I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Internet hosts for many years). But that is not what is of primary interest, and is not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'. OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?)

This is the kind of post-mortem that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the site News section, and apparently dropped). It still would be useful to know and share that knowledge - especially the datum of whether or not the path to root privilege was a kernel bug (and, if not, what it was). Rick Moen [email protected]



Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



I've done a closer review of the revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach': root escalation was via exploit of a Linux kernel security hole. Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials.

Other tidbits:

- Site admins left the root-compromised Web servers running with all services still lit up, for multiple days.
- Site admins and Linux Foundation sat on the information and failed to inform the public for those same multiple days.
- Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
- After promising a report for multiple years and then quietly removing that promise from the front page of kernel.org, Linux Foundation now stonewalls press queries.

I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we would know what happened for sure.) I do have to wonder: if there's another embarrassing screwup, will we even be told about it at all? Rick Moen [email protected]



Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link]



Also, it is preferable to use live memory acquisition prior to powering off the system; otherwise you lose out on memory-resident artifacts that you can perform forensics on. -Brad



How about the long-overdue post-mortem on the August 2011 kernel.org compromise?



Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]



Thanks for your comments, Brad. I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I'd heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG. That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

'Also, it is preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.'

Arguable, but a tradeoff: you can poke the compromised live system for state information, but with the disadvantage of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion.

Rick Moen
[email protected]



Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link]



Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link]



With "something" you mean those who produce those closed source drivers, right? If the "consumer product companies" just stuck to using parts with mainlined open source drivers, then updating their products would be much easier.



A new Mindcraft moment?



Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]



They have ring zero privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it's game over. Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally, as such modules are typically video drivers optimised for games ...