Hacker News | _oq9t's comments

Any timeline for Python3 support?


Since the Flash update is now bundled with Windows Update, Edge users will be running a vulnerable Flash for one more month, wow :/


The "Disable Falsh" button is under Advanced Settings on Edge. Switched it off and I barely notice anything is missing these days.


True, but for the 90% of Edge users who aren't technical, going into advanced settings and disabling Flash is probably beyond their abilities.


Wasn't Edge originally supposed to be updated via the Store?


Flash still exists?

I used to avoid annoyances by not having Flash. Now, thanks to the hard work of WHATWG on HTML5, I'm scrod.



Chrome uses Blink which was forked from WebKit long ago, see https://techcrunch.com/2013/04/03/google-forks-webkit-and-la...


Not all cats eat just what they need; one of our cats eats twice what he needs if he finds food available. That's why this kind of automated feeder makes sense.


I sincerely hope Red Hat continues to develop Cygwin further. It's amazing to be able to run my favourite shell on Windows.


This is part of llvm trunk (upcoming 3.9 release) now: http://llvm.org/docs/CompileCudaWithLLVM.html
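For anyone who wants to kick the tires: the linked doc compiles a .cu file directly with clang, no nvcc involved. A minimal sketch along those lines (the sm_35 arch and the /usr/local/cuda path are assumptions for my setup; see the doc for the authoritative flags):

    // saxpy.cu -- tiny kernel just to exercise clang's CUDA front-end
    #include <cstdio>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
      const int n = 1 << 20;
      float *x, *y;
      cudaMallocManaged(&x, n * sizeof(float));
      cudaMallocManaged(&y, n * sizeof(float));
      for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
      saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
      cudaDeviceSynchronize();
      printf("y[0] = %f\n", y[0]);  // expect 5.000000
      cudaFree(x); cudaFree(y);
      return 0;
    }

Built with something like:

    clang++ saxpy.cu -o saxpy --cuda-gpu-arch=sm_35 \
      -L/usr/local/cuda/lib64 -lcudart_static -ldl -lrt -pthread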


Thanks for the link! Pretty exciting stuff.

Can anyone comment on the following quote:

The list below shows some of the more important optimizations for GPUs... A few of them have not been upstreamed due to lack of a customizable target-independent optimization pipeline.

So the LLVM version of gpucc will be incomplete? Will there be a release of the original stand-alone gpucc?


Thanks for your interest, and hope you like it!

Yes, it is currently incomplete, but I'd say at least 80% of the optimizations are upstreamed already. Also, folks in the LLVM community are actively working on that. For example, Justin Lebar recently pushed http://reviews.llvm.org/D18626 that added the speculative execution pass to -O3.
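To give a feel for what that pass does on GPU code, here's a made-up kernel (not from any benchmark) with a cheap computation hidden behind a divergent branch:

    __global__ void clamp_square(const float *x, float *y, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i >= n) return;
      float v = x[i];
      // Divergent branch: threads within one warp may disagree on v > 0.
      if (v > 0.0f) {
        float t = v * v + 1.0f;   // cheap, side-effect-free work
        y[i] = t;
      } else {
        y[i] = 0.0f;
      }
      // Speculatively hoisting the cheap computation above the branch lets
      // later passes flatten the control flow into a select, roughly:
      //   float t = v * v + 1.0f;          // executed unconditionally
      //   y[i] = (v > 0.0f) ? t : 0.0f;    // no divergent branch left
    }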

Regarding performance, one thing worth noting is that missing one optimization does not necessarily cause significant slowdown on the benchmarks you care about. For example, the memory-space alias analysis only noticeably affects one benchmark in the Rodinia benchmark suite.
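For context on what that analysis does: pointers into different CUDA memory spaces (global, shared, constant, local) can never alias, but after inlining they often all look like generic pointers. A toy device function, made up for this comment rather than taken from Rodinia, where that fact matters:

    __device__ void stage(float *smem_tile, float *gmem_in,
                          float *gmem_out, int i) {
      smem_tile[threadIdx.x] = gmem_in[i];   // store through a generic pointer
      // Without memory-space information the optimizer must assume this
      // store may have clobbered gmem_in[i] and re-load it conservatively.
      // Inferring that smem_tile points into __shared__ memory and gmem_in
      // into global memory proves the two cannot alias, so the load below
      // can reuse the value already loaded above.
      gmem_out[i] = gmem_in[i] + smem_tile[threadIdx.x];
    }

    __global__ void kernel(float *in, float *out, int n) {
      __shared__ float tile[256];
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) stage(tile, in, out, i);
    }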

Regarding your second question, the short answer is no. The Clang/LLVM version uses a different architecture (as mentioned in http://wujingyue.com/docs/gpucc-talk.pdf) from the internal version. The LLVM version offers better functionality and compilation time, and is much easier to maintain and improve in the future. It would cost even more effort to upstream the internal version than to make all optimizations work with the new architecture.


In fact I think at the moment almost everything, other than the memory-space alias analysis and a few pass tuning tweaks, is in. I know the former will be difficult to land, and I suspect the latter may be as well.

I don't have a lot of benchmarks at the moment, so I can't say how important they are. And it of course depends on what you're doing.

clang/llvm's CUDA implementation shares most of the backend with gpucc, but it's an entirely new front-end. The front-end works for tensorflow, eigen, and thrust, but I suspect if you try hard enough you'll be able to find something nvcc accepts that we can't compile. At the moment we're pretty focused on making it work well for Tensorflow.


Thanks for the clarification! It's always a pleasure to get a direct response from the first author on something as awesome as this.

I'm definitely subscribing to the llvm-dev list[1] in case any discussion on this continues there. There are also the llvm-commits, clang-dev, and clang-commits lists, but llvm-dev seems like the right place for this.

Gpucc in LLVM is definitely a breath of fresh air for all of us nvcc users. Getting to see some compiler internals for CUDA feels like Christmas. A big thanks from me for all the upstreaming effort!

1: http://lists.llvm.org/mailman/listinfo/llvm-dev


Looking forward to a CUDA Fortran frontend for this. Does it exist already?


No idea, but I do know that the PGI group has had a working CUDA Fortran compiler since 2013:

http://www.pgroup.com/doc/pgicudaforug.pdf

One could take one's Fortran code and simply recompile it with their compiler to run on Nvidia GPUs. The compiler would perform automatic parallelization. Wild stuff.


I'm aware of that; it's the main GPU compiler I'm using currently. But PGI has only limited resources, and it would be very cool if there were a second player in town, especially one of the big five.

Btw, I'm working on something that's geared towards pretty much exactly what you're talking about. My stretch goal is fully automatic GPU parallelization for data-parallel Fortran code [1].

[1] https://github.com/muellermichel/Hybrid-Fortran


If only it didn't still need the proprietary CUDA SDK.


That is a very valid concern and a key motivation for the proposed StreamExecutor project (http://lists.llvm.org/pipermail/llvm-dev/2016-March/096576.h...).


We have two cats and pondered this a long time ago. The best resource we found was the BBC documentary http://www.bbc.co.uk/programmes/b04lcqvq which also has an e-book: http://www.bbc.co.uk/programmes/articles/4Hbdn6T21hKDH6bfVBk...


Great, it would be nice to have an updated clang (it seems to be using 3.5.1) and Python 3 (using 3.4.3). Generally it's nice to note somewhere which compiler version it's using.


