All times are UTC-06:00




PostPosted: Thu May 20, 2010 9:24 am
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
The story is here:

http://imxcommunity.org/profiles/blogs/ ... ith-hardfp

I'll be making regular posts on this blog, so stay tuned :D


PostPosted: Thu May 20, 2010 7:05 pm
Joined: Tue Mar 31, 2009 10:24 pm
Posts: 171
good job, Konstantinos! we're slowly but steadily getting there*

off-topic: the comment editor on imxcommunity.org is bare bones - no preview, no editing. so i bear no responsibility for any incomprehensible gibberish i end up posting there ; )


* 'there' being a functional 3d graphics development station ;p


PostPosted: Tue May 25, 2010 4:39 am
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Another update:

2722 packages built so far,

seeds ubuntu-minimal and ubuntu-standard are ready; ubuntu-desktop needs ~95 more packages (as of right now) to be ready. In the meantime I'm modifying rootstock to work natively (right now it only works on x86 and uses qemu), and then I'll provide 3 SD card images and tarballs, one for each seed.


PostPosted: Tue May 25, 2010 1:15 pm
Joined: Thu Jul 28, 2005 12:41 am
Posts: 1066
Do you already have any performance data?

_________________
CzP
http://czanik.blogs.balabit.com/


PostPosted: Wed May 26, 2010 10:23 am
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Quote:
Do you already have any performance data?
very very preliminary:
(softfp)
$ ./bench_gemm.gcc4.4.1cs+genesi.neon
eigen cpu 2.37s 0.906111 GFLOPS (11.9s)
eigen real 2.36472s 0.908134 GFLOPS (11.9004s)

(hardfp)
$ ./bench_gemm.gcc4.4.1cs+genesi.neon.hard
eigen cpu 2.35s 0.913823 GFLOPS (11.85s)
eigen real 2.35611s 0.911453 GFLOPS (11.8477s)

mind you, performance should get better the more function calls there are. Eigen is a special case in that it doesn't really call that many functions; most things are inlined.


PostPosted: Thu Jun 10, 2010 11:22 am
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Here they are:

http://freevec.org/repository/

(this is temporary, soon they'll be moved to a Genesi server)

The desktop (gnome) image is huge (~1.4GB), I'll upload it later.

Cheers

PS. For those who have noticed the change of my signature, this blog explains it.


PostPosted: Thu Jun 10, 2010 12:11 pm
Site Admin
Joined: Fri Sep 24, 2004 1:39 am
Posts: 1589
Location: Austin, TX
Quote:
Quote:
Do you already have any performance data?
very very preliminary:
(softfp)
$ ./bench_gemm.gcc4.4.1cs+genesi.neon
eigen cpu 2.37s 0.906111 GFLOPS (11.9s)
eigen real 2.36472s 0.908134 GFLOPS (11.9004s)

(hardfp)
$ ./bench_gemm.gcc4.4.1cs+genesi.neon.hard
eigen cpu 2.35s 0.913823 GFLOPS (11.85s)
eigen real 2.35611s 0.911453 GFLOPS (11.8477s)

mind you, performance should get better the more function calls we have. Eigen is a special case in that it doesn't really call that many functions, most things are inlined.
Any way to get this into a nice scatter plot of multiple benchmark runs so that we can work out the performance and standard deviation and see if the improvement is statistically significant or just completely marginal?

Oh, I just thought: EEMBC CoreMark should be something to try. It should work fine on the i.MX515, but there are some tweaks to pass FPU arguments in registers or not (the MATDATA type can be defined as an integer or FPU argument depending on the compile, for example).

It would be an awesome way to get a real industry embedded performance benchmark - especially tied to performance FPU computing - for real results.

_________________
Matt Sealey


PostPosted: Fri Jun 18, 2010 2:36 pm
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Quote:
Do you already have any performance data?
Ok,

I ran a simple but famous benchmark for 3D performance on the default X server (fbdev driver, no 2D/3D acceleration whatsoever). The reason was simple: software 3D is done by the Mesa library, and lots of floating point arguments get passed around in functions (like glRotate(), glTranslate(), etc.), so any gain from hardfp would be visible. I ran the benchmark on two otherwise identical EfikaMXs (TO2), both running Ubuntu karmic 9.10, one with the default pre-installed (softfp) system and the other built with hardfp. Both systems were SD-less -which means the OS boots from the flash drive. Both were connected to the same 17" TFT monitor (1280x1024 resolution) via an HDMI-DVI cable.

Both systems were running the benchmark and nothing else. So here are the results:

softfp:
$ glxgears
80 frames in 5.0 seconds
119 frames in 5.0 seconds
118 frames in 5.0 seconds
120 frames in 5.0 seconds
118 frames in 5.0 seconds
120 frames in 5.0 seconds
119 frames in 5.0 seconds
113 frames in 5.0 seconds
^C

hardfp:
136 frames in 5.0 seconds
144 frames in 5.0 seconds
143 frames in 5.0 seconds
139 frames in 5.0 seconds
141 frames in 5.0 seconds
144 frames in 5.0 seconds
143 frames in 5.0 seconds
144 frames in 5.0 seconds
^C

Taking the best runs, we have 120 vs. 144 frames per 5 seconds (24 vs. 28.8 fps), which gives us ~20% better performance using the hardfp approach.

I'll try to find some more benchmarks to run, but as far as I'm concerned, I'm convinced: hardfp is the way to go with these new ARM CPUs.

Now if only wanna-build decides to play fair and just work... :)

Stay tuned.


PostPosted: Mon Jun 21, 2010 12:20 am
Joined: Thu Jul 28, 2005 12:41 am
Posts: 1066
Wow, very impressive! Especially since graphics is the most resource-intensive part of desktop computing.

_________________
CzP
http://czanik.blogs.balabit.com/


Post subject: More benchmarks
PostPosted: Thu Jul 01, 2010 1:52 pm
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Matt just gave me a url with a very simple benchmark for floating point at:

http://svn.arhuaco.org/svn/src/emqbit/t ... bit-bench/

So I downloaded it on both the softfp and the hardfp Efikas, built it and ran it; here are the results of its two binaries:

softfp:
Code:
$ ./bench
nTimes=93750 16: Dot with C code => (flops 90.691986 : time:0.033079 us)
nTimes=61225 16: Distance with C code => (flops 64.782761 : time:0.046309 us)
nTimes=46875 32: Dot with C code => (flops 96.661934 : time:0.031036 us)
nTimes=30928 32: Distance with C code => (flops 72.299995 : time:0.041494 us)
nTimes=23438 64: Dot with C code => (flops 91.778763 : time:0.032688 us)
nTimes=15545 64: Distance with C code => (flops 75.102257 : time:0.039948 us)
nTimes=11719 128: Dot with C code => (flops 101.638512 : time:0.029517 us)
nTimes=7793 128: Distance with C code => (flops 71.021545 : time:0.042245 us)
nTimes=5860 256: Dot with C code => (flops 102.676842 : time:0.029221 us)
nTimes=3902 256: Distance with C code => (flops 76.789795 : time:0.039076 us)
nTimes=2930 512: Dot with C code => (flops 94.328918 : time:0.031807 us)
nTimes=1952 512: Distance with C code => (flops 76.468056 : time:0.039235 us)
nTimes=1465 1024: Dot with C code => (flops 103.355949 : time:0.029029 us)
nTimes=977 1024: Distance with C code => (flops 72.532097 : time:0.041393 us)
nTimes=733 2048: Dot with C code => (flops 102.796173 : time:0.029207 us)
nTimes=489 2048: Distance with C code => (flops 72.398621 : time:0.041505 us)
nTimes=367 4096: Dot with C code => (flops 103.649727 : time:0.029006 us)
nTimes=245 4096: Distance with C code => (flops 76.943657 : time:0.03913 us)
nTimes=184 8192: Dot with C code => (flops 95.036598 : time:0.031721 us)
nTimes=123 8192: Distance with C code => (flops 77.351425 : time:0.039081 us)
nTimes=92 16384: Dot with C code => (flops 103.635597 : time:0.029089 us)
nTimes=62 16384: Distance with C code => (flops 72.119598 : time:0.042256 us)
nTimes=46 32768: Dot with C code => (flops 88.333809 : time:0.034128 us)
nTimes=31 32768: Distance with C code => (flops 77.195709 : time:0.039477 us)
nTimes=23 65536: Dot with C code => (flops 94.034622 : time:0.032059 us)
nTimes=16 65536: Distance with C code => (flops 70.625801 : time:0.044541 us)
16, 90.691986, 64.782761,
32, 96.661934, 72.299995,
64, 91.778763, 75.102257,
128, 101.638512, 71.021545,
256, 102.676842, 76.789795,
512, 94.328918, 76.468056,
1024, 103.355949, 72.532097,
2048, 102.796173, 72.398621,
4096, 103.649727, 76.943657,
8192, 95.036598, 77.351425,
16384, 103.635597, 72.119598,
32768, 88.333809, 77.195709,
65536, 94.034622, 70.625801
hardfp:
Code:
$ ./bench
nTimes=93750 16: Dot with C code => (flops 124.048958 : time:0.024184 us)
nTimes=61225 16: Distance with C code => (flops 79.258804 : time:0.037851 us)
nTimes=46875 32: Dot with C code => (flops 121.045830 : time:0.024784 us)
nTimes=30928 32: Distance with C code => (flops 80.882591 : time:0.037091 us)
nTimes=23438 64: Dot with C code => (flops 116.150993 : time:0.025829 us)
nTimes=15545 64: Distance with C code => (flops 79.433014 : time:0.03777 us)
nTimes=11719 128: Dot with C code => (flops 112.479904 : time:0.026672 us)
nTimes=7793 128: Distance with C code => (flops 84.996887 : time:0.035299 us)
nTimes=5860 256: Dot with C code => (flops 111.366318 : time:0.026941 us)
nTimes=3902 256: Distance with C code => (flops 84.570282 : time:0.035481 us)
nTimes=2930 512: Dot with C code => (flops 111.581688 : time:0.026889 us)
nTimes=1952 512: Distance with C code => (flops 84.589600 : time:0.035468 us)
nTimes=1465 1024: Dot with C code => (flops 112.115395 : time:0.026761 us)
nTimes=977 1024: Distance with C code => (flops 84.379898 : time:0.035581 us)
nTimes=733 2048: Dot with C code => (flops 111.256500 : time:0.026986 us)
nTimes=489 2048: Distance with C code => (flops 85.648865 : time:0.035084 us)
nTimes=367 4096: Dot with C code => (flops 109.561012 : time:0.027441 us)
nTimes=245 4096: Distance with C code => (flops 84.969376 : time:0.035434 us)
nTimes=184 8192: Dot with C code => (flops 111.856926 : time:0.026951 us)
nTimes=123 8192: Distance with C code => (flops 84.636757 : time:0.035717 us)
nTimes=92 16384: Dot with C code => (flops 110.257339 : time:0.027342 us)
nTimes=62 16384: Distance with C code => (flops 84.866913 : time:0.035909 us)
nTimes=46 32768: Dot with C code => (flops 109.671715 : time:0.027488 us)
nTimes=31 32768: Distance with C code => (flops 85.552208 : time:0.035621 us)
nTimes=23 65536: Dot with C code => (flops 108.386276 : time:0.027814 us)
nTimes=16 65536: Distance with C code => (flops 82.243820 : time:0.038249 us)
16, 124.048958, 79.258804,
32, 121.045830, 80.882591,
64, 116.150993, 79.433014,
128, 112.479904, 84.996887,
256, 111.366318, 84.570282,
512, 111.581688, 84.589600,
1024, 112.115395, 84.379898,
2048, 111.256500, 85.648865,
4096, 109.561012, 84.969376,
8192, 111.856926, 84.636757,
16384, 110.257339, 84.866913,
32768, 109.671715, 85.552208,
65536, 108.386276, 82.243820
And the cfft binary:

softfp:
Code:
$ ./cfft
nTimes=6250 N=16: (flops 43.850990 : time:0.045609 us)
nTimes=2500 N=32: (flops 43.096947 : time:0.046407 us)
nTimes=1042 N=64: (flops 45.744595 : time:0.043735 us)
nTimes=447 N=128: (flops 42.186687 : time:0.047469 us)
nTimes=196 N=256: (flops 44.160267 : time:0.045449 us)
nTimes=87 N=512: (flops 42.001507 : time:0.047724 us)
nTimes=40 N=1024: (flops 43.827175 : time:0.046729 us)
nTimes=18 N=2048: (flops 41.382183 : time:0.048995 us)
nTimes=9 N=4096: (flops 42.877579 : time:0.051585 us)
nTimes=4 N=8192: (flops 40.669060 : time:0.052372 us)
nTimes=2 N=16384: (flops 41.244293 : time:0.055614 us)
nTimes=1 N=32768: (flops 39.966824 : time:0.061491 us)
nTimes=1 N=65536: (flops 36.040470 : time:0.145472 us)
16, 43.850990
32, 43.096947
64, 45.744595
128, 42.186687
256, 44.160267
512, 42.001507
1024, 43.827175
2048, 41.382183
4096, 42.877579
8192, 40.669060
16384, 41.244293
32768, 39.966824
65536, 36.040470
hardfp:
Code:
$ ./cfft
nTimes=6250 N=16: (flops 57.763405 : time:0.034624 us)
nTimes=2500 N=32: (flops 58.339657 : time:0.034282 us)
nTimes=1042 N=64: (flops 57.334785 : time:0.034894 us)
nTimes=447 N=128: (flops 56.781216 : time:0.035268 us)
nTimes=196 N=256: (flops 56.472710 : time:0.03554 us)
nTimes=87 N=512: (flops 54.912746 : time:0.036503 us)
nTimes=40 N=1024: (flops 55.002014 : time:0.037235 us)
nTimes=18 N=2048: (flops 54.731277 : time:0.037045 us)
nTimes=9 N=4096: (flops 54.089794 : time:0.040892 us)
nTimes=4 N=8192: (flops 53.270641 : time:0.039983 us)
nTimes=2 N=16384: (flops 40.333393 : time:0.05687 us)
nTimes=1 N=32768: (flops 50.530472 : time:0.048636 us)
nTimes=1 N=65536: (flops 48.851868 : time:0.107322 us)
16, 57.763405
32, 58.339657
64, 57.334785
128, 56.781216
256, 56.472710
512, 54.912746
1024, 55.002014
2048, 54.731277
4096, 54.089794
8192, 53.270641
16384, 40.333393
32768, 50.530472
65536, 48.851868
That's a ~30% speed gain just from a simple recompile! I knew the system actually *felt* faster, and now the numbers prove it once again. Still working on setting up the compile farm -lots of trouble with wanna-build, but I think I've found a solution. Stay tuned!


Post subject: Re: More benchmarks
PostPosted: Thu Jul 01, 2010 2:00 pm
Site Admin
Joined: Fri Sep 24, 2004 1:39 am
Posts: 1589
Location: Austin, TX
Quote:
Matt just gave me a url with a very simple benchmark for floating point at:

http://svn.arhuaco.org/svn/src/emqbit/t ... bit-bench/
Actually I got that URL from here:

http://www.linuxfordevices.com/c/a/Linu ... I-matters/

Which explained the basic benefit in plain terms all of 3 years ago. Which raises the question: if there is all this benefit, why hasn't anyone bitten the bullet and started making hardfp distributions before now?

It is not like the Cortex-A8 is the first to have hardware floating point, and before it, with the older VFP versions (VFPv1, VFPv2, etc.) and other chipsets using FPA and iwMMXt, the FPU units were not all that great - padding 20% of the performance with pipeline stalls and register copies around every FP function call just makes it all the worse. Granted, if the FPU is slow, that padding is less noticeable.

As you get to the Cortex-A9, with its new VFP and NEON units which finally have single-cycle issue for every instruction, the performance hit of the pipeline stall for the register copies is going to be extremely noticeable.

The big question is: do you make all distributions rely on VFPv3 (with 16 or 32 registers?) and only support NEON as a detectable feature, or target just VFPv3-d16 to support the lower performance chips as well as the bigger ones, at the cost of some performance on the bigger ones (to the degree that x86 is slower than amd64, basically, which is not MUCH, but it is noticeable)?

BTW you say the systems feel faster, you have a desktop here? Is it running a lot of FPU code for instance for graphics, do you think?

_________________
Matt Sealey


Post subject: Re: More benchmarks
PostPosted: Thu Jul 01, 2010 2:10 pm
Joined: Wed Oct 13, 2004 7:26 am
Posts: 348
Quote:
Quote:
Matt just gave me a url with a very simple benchmark for floating point at:

http://svn.arhuaco.org/svn/src/emqbit/t ... bit-bench/
Actually I got that URL from here:

http://www.linuxfordevices.com/c/a/Linu ... I-matters/

Which explained the basic benefit in plain terms all of 3 years ago. Which raises the question: if there is all this benefit, why hasn't anyone bitten the bullet and started making hardfp distributions before now?
I had the same question.
Quote:
It is not like the Cortex-A8 is the first to have hardware floating point, and before it, with the older VFP versions (VFPv1, VFPv2, etc.) and other chipsets using FPA and iwMMXt, the FPU units were not all that great - padding 20% of the performance with pipeline stalls and register copies around every FP function call just makes it all the worse. Granted, if the FPU is slow, that padding is less noticeable.

As you get to the Cortex-A9, with its new VFP and NEON units which finally have single-cycle issue for every instruction, the performance hit of the pipeline stall for the register copies is going to be extremely noticeable.
Well, I don't have access to an A9, so I can't really say. :)
Quote:
The big question is: do you make all distributions rely on VFPv3 (with 16 or 32 registers?) and only support NEON as a detectable feature, or target just VFPv3-d16 to support the lower performance chips as well as the bigger ones, at the cost of some performance on the bigger ones (to the degree that x86 is slower than amd64, basically, which is not MUCH, but it is noticeable)?
It depends on what your target is, really. Once a hardfp vfpv3 repo -which is what I use right now- is created, there's nothing stopping us from building a second repo with hardfp/neon, building only selected packages in it, and using it with a higher priority than the default repo, so that neon packages get preferred on neon systems.
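That priority scheme maps directly onto APT pinning; a minimal sketch, assuming a hypothetical neon repo whose Release file carries the origin `hardfp-neon` (names illustrative):

```
# /etc/apt/preferences.d/neon -- prefer packages from the neon repo
# wherever both repos ship a package (600 beats the default 500).
Package: *
Pin: release o=hardfp-neon
Pin-Priority: 600
```

With that in place, apt would install the neon build of any package present in both repos, and fall back to the plain hardfp build otherwise.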
Quote:
BTW you say the systems feel faster, you have a desktop here? Is it running a lot of FPU code for instance for graphics, do you think?
Yeah, I have a complete GNOME desktop (I built ~4k packages here, incl. GNOME and basic KDE :). The system is almost an exact replica of the default installation image. I don't know how much the FPU is used across the desktop, but it's definitely used for SVG icons, for example, or JPEG background loading. There are so many things that use the FPU that it's difficult to measure exactly which component is so much faster, but it does feel faster than the default installation.


Post subject: Re: More benchmarks
PostPosted: Fri Jul 02, 2010 12:08 pm
Site Admin
Joined: Fri Sep 24, 2004 1:39 am
Posts: 1589
Location: Austin, TX
Quote:
It depends on what your target is, really. Once a hardfp vfpv3 repo -which is what I use right now- is created, there's nothing stopping us from building a second repo with hardfp/neon, building only selected packages in it, and using it with a higher priority than the default repo, so that neon packages get preferred on neon systems.
That's a very good point. I guess we need a survey of which CPUs in general use have which FPU and SIMD units, so the repos can be determined and overlaid. Basically you could pick (ARM11) ARMv6 with VFPv3 as the base, is my understanding.

So far any Cortex SoC for high end use (as it relates to ARM this is anything up to about a Smartbook) including OMAP3, OMAP4, Freescale iMX31, iMX51, Qualcomm Snapdragon and a hell of a lot of other devices would be supported by this system.

Thumb2/ThumbEE needs to be a targeted format too, so that Cortex-R and Cortex-M processors can be supported. This would be an add-on repository for most people but the default for those processors which only support Thumb code.

Anything lower - ARM9 (armv5) processors like iMX21/27, older OMAP, and a lot of other devices may as well use standard softfp. Marvell processors would be left out and be stuck with this.

_________________
Matt Sealey


PostPosted: Wed May 04, 2011 4:12 pm
Joined: Sun May 01, 2011 6:12 pm
Posts: 42
Location: Denmark
Interesting looking benchmarks!

I'm wondering if I could get someone to do some benchmarks with the Phoronix Test Suite (http://www.phoronix-test-suite.com/) on Efika MX hardware.
Phoronix Test Suite is an application that is able to perform benchmarks on a system - for example 7-Zip compression speed, FLAC encoding speed, OpenSSL encryption performance - and compare these results directly with other systems by uploading them to OpenBenchmarking.org.
The available tests are here: http://openbenchmarking.org/tests/pts and it's possible to create custom tests.

So, for example, some of the benchmark results for the GZip test profile are here: http://openbenchmarking.org/test/pts/compress-gzip
To perform this specific test on a system that has Phoronix Test Suite installed, run the command:
Code:
phoronix-test-suite benchmark pts/compress-gzip
The benchmark is performed and the results are uploaded to openbenchmarking.org. One can then compare the generated results with, for example, the results of a system with the two-core AMD Fusion CPU, the AMD E-350 @ 1.6 GHz: http://openbenchmarking.org/result/1103 ... ZIP9777969

It is also possible to find any test you like, let's say this one: http://openbenchmarking.org/result/1103 ... OMPRESS794, perform the tests that this page has benchmarks for, and upload the results to the page to compare your system and this system directly. Just go down the page and click "Compare Performance" and run the command listed there:
Code:
phoronix-test-suite benchmark 1103067-IV-COMPRESS794
The same benchmarks that this results-page contains are performed on your system and uploaded to this results page for direct comparison.

It requires that the system in question has the PHP5 CLI installed (php5-cli on Ubuntu) and, of course, the executable being tested by a specific benchmark, e.g. 7za, gzip, flac, openssl.

I'm very curious to see the performance compared between the x86 and ARM architectures specifically. Of course I know that ARM will score lower; how much lower is what I'm really curious to find out.


PostPosted: Wed May 04, 2011 5:05 pm
Joined: Wed Jul 01, 2009 4:35 pm
Posts: 94
Location: Italy
I have already run the Phoronix tests on the Efika MX, with the default Genesi Ubuntu 10.10, for an article coming out in Italy in Linux & C. magazine.
Quote:
I'm wondering if I could get someone to do some benchmarks with the Phoronix Test Suite (http://www.phoronix-test-suite.com/) on Efika MX hardware. [...]

_________________
http://deliriotecnologico.blogspot.com


PowerDeveloper.org: Copyright © 2004-2012, Genesi USA, Inc. The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.