Channel: Test – Geeks3D

Windows 8 Developer Preview, VirtualBox (Quick Test)


Windows 8 developer preview



Like many other developers and geeks (judging by what I read on Twitter yesterday), I tested the Developer Preview of Windows 8. But I only wanted to do a quick and dirty test, so I installed Win8 in VirtualBox:
  • 1 – Download the 4.5GB ISO from HERE.
  • 2 – Mount this ISO on a virtual DVD drive with Daemon Tools.
  • 3 – Create a virtual machine with VirtualBox (I selected a Win7 64-bit profile with 40GB of HDD) and give it a name; I called mine Win8Box.
  • 4 – Start the virtual machine and select the virtual DVD drive to launch the Win8 installation.

The Win8 dev preview is a 64-bit-only version that ships with developer tools (Windows SDK for Metro style apps, Microsoft Visual Studio 11 Express for Windows Developer Preview). The DirectX 11.1 SDK is also present…

Once the installation is completed, Win8 looks as follows:

Windows 8 developer preview, MSI Kombustor, D3D9



There is a horizontal slider to move the screen left and right: the Win8 interface is designed for tablets! To get the usual desktop, you have to click on the small desktop icon. And there we have the Win7-like desktop. Win8 also comes with a new task manager:

Windows 8 developer preview, MSI Kombustor, D3D9



Win8 is compatible with Win7 apps:

On x86 and x64 PCs, Windows 8 supports Windows 7 desktop applications and devices so you don’t have to compromise or give up what you’re used to. On these PCs, your existing Windows 7-based applications just work.

Okay, with VirtualBox you don’t have access to OpenGL stuff, and Direct3D support is limited, but D3D9 seems to work. I successfully launched MSI Kombustor with the Direct3D 9 render path:

Windows 8 developer preview, MSI Kombustor, D3D9



Here are various links about Win8:


(Tested) cRARk, OpenCL Password Cracker for RAR files (***Updated***)


Radeon cards, the perfect cracking hardware


UPDATE: cRARk author just contacted me about the problem on NVIDIA hardware:

As for fail on NVIDIA cards, it’s so simple in it’s written in the FAQ – you need to run driver-timeout.reg and reboot. (RAR kernel is so long and Windows can’t wait so much ;)

Indeed, it’s simple when you know it :D


cRARk is a free password cracker (or recovery tool ;) ) for RAR archives. One interesting feature is its OpenCL support for accelerating the cracking (on AMD Radeon and NVIDIA GeForce cards). CUDA is also supported on NVIDIA cards.

cRARk is a command-line tool and is very easy to use. If the password is not too long (fewer than 6 characters), you have a good chance of finding it with cRARk.

I tested the tool with a simple RAR file protected by the following password: toto. I first tried a GeForce GTX 580, but without success. No matter which GPU computing API I used (OpenCL or CUDA), I got the same error:

cRARk, CUDA, GeForce  GTX 580
CUDA version

cRARk, OpenCL, GeForce  GTX 580
OpenCL version



On a Radeon HD 6970 (with Catalyst 11.10 Preview 3), on the other hand, the OpenCL version of cRARk worked perfectly fine: it cracked my simple test file in 39 seconds. To find the password toto, cRARk tested 411,489 passwords at a rate of 10,438 passwords/second.
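The numbers line up with a simple back-of-the-envelope brute-force estimate: a 4-character lowercase password such as toto lives in a keyspace of 26^4 = 456,976 candidates, so at roughly 10,400 passwords/second the worst case is about 44 seconds. A quick sketch (pure Python; `worst_case_seconds` is just an illustrative helper of mine, not part of cRARk):

```python
# Rough brute-force time estimate for a lowercase-only password.
# Rate figure taken from the cRARk run above (Radeon HD 6970, OpenCL).

def worst_case_seconds(length: int, alphabet_size: int, rate: float) -> float:
    """Time to exhaust the whole keyspace at `rate` passwords/second."""
    return (alphabet_size ** length) / rate

rate = 10438.0                       # passwords/s measured above
t = worst_case_seconds(4, 26, rate)  # 'toto': 4 lowercase letters
print(f"keyspace: {26 ** 4} candidates, worst case: {t:.0f} s")
# The measured 39 s (411,489 candidates tested) is consistent:
# cRARk found the password before exhausting the full keyspace.
```

Of course, each extra character multiplies the keyspace by the alphabet size, which is why only short passwords fall this quickly.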

cRARk, OpenCL, Radeon HD 6970
OpenCL version + HD 6970



For the second test with the Radeon HD 6970, I cracked another RAR file, this time with the following password: t4t4:

cRARk, OpenCL, Radeon HD 6970
OpenCL version + HD 6970

Finding the password took a bit longer (1min 37sec), but the tool did it!

This test confirms that AMD Radeon cards are still the hardware of choice for crackers (see this article: Radeon HD 5970: the Ultimate Password Cracking Hardware?)!

Where can you find cRARk? Really simple, just visit this page: cRARk: Fastest utility to crack RAR password.

Asus Eee Pad Transformer, Quick WebGL Test


ASUS Eee Pad Transformer


I quickly played with ASUS’s Eee Pad Transformer TF101, the one based on NVIDIA’s Tegra 2 processor (dual core CPU). The operating system is Android 3.2.1 (kernel 2.6.36.3 android@Mercury #1).

ASUS Eee Pad Transformer



Actually, what I wanted to test was the 3D side, and especially WebGL. With online WebGL tools like Shader Toy or GLSL Sandbox, you can now code your pixel shaders from nearly anywhere…

First things first: you must install a browser that supports WebGL, because the default one does not:

ASUS Eee Pad Transformer



I installed Firefox (I really don’t appreciate all these OSes that require an account (here, Gmail) just to install an app like Firefox or the Flash player!). Okay, let’s forget that point and get to the cool stuff, I mean WebGL.

I first tested this WebGL blob demo:

ASUS Eee Pad Transformer, WebGL test

Sounds cool, but if you look a bit closer, the rendering is pixelated and the framerate is very, very low (a few FPS):

ASUS Eee Pad Transformer, WebGL test

Okay, let’s try to modify the pixel shader:

ASUS Eee Pad Transformer, WebGL test

Oops! Let’s try vertically:

ASUS Eee Pad Transformer, WebGL test


It’s a bit better. But without arrow keys, it’s really hard to position the cursor on a particular line of code, for example to change the values of a vec4. Not enough precision! I guess you can forget live coding at the train station unless you have the keyboard dock :D



Second test, this pixel shader:

ASUS Eee Pad Transformer, WebGL test




Obviously, something, either the Tegra 2 or Firefox for Android, does not like raymarching. The correct rendering with Firefox / Win7 is:
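For context, raymarching (sphere tracing) repeatedly steps a ray forward by the distance to the nearest surface, which makes the fragment shader loop- and ALU-heavy — a plausible reason the Tegra 2 / Firefox combo chokes on it. Here is a minimal CPU-side sketch of the idea in Python (illustrative only, not the demo’s actual shader):

```python
import math

def sphere_distance(p, center=(0.0, 0.0, 3.0), radius=1.0):
    """Signed distance from point p to a sphere."""
    return math.dist(p, center) - radius

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=20.0):
    """March along the ray; return the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sphere_distance(p)
        if d < eps:
            return t   # close enough: surface hit
        t += d         # safe step: nearest surface is exactly d away
        if t > max_dist:
            break
    return None        # ray escaped the scene

hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
print(f"hit at t = {hit:.4f}")  # sphere at z=3, radius 1 -> t = 2.0
```

A GLSL version runs this loop per pixel, every frame, which is exactly the kind of dynamic-loop shader that weak mobile GPUs or immature drivers tend to miscompile or crawl through.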

ASUS Eee Pad Transformer, WebGL test, GLSL Sandbox



After this first test, I launched Shader Toy in order to get some framerate numbers. Here is the default demo of Shader Toy:

ASUS Eee Pad Transformer, WebGL test



And here is the FPS:

ASUS Eee Pad Transformer, WebGL test




On a real PC, you easily reach 200 FPS or more. Same thing for the Plasma demo:

ASUS Eee Pad Transformer, WebGL test

ASUS Eee Pad Transformer, WebGL test



Conclusion: phew, the test is over! I’m happy to return to my PC for serious coding ;) I’m curious to repeat these WebGL tests with ASUS’s Transformer Prime…




Linux: Mesa, Gallium3D, Nouveau and NVIDIA Drivers, OpenGL Test (GTX 280, GTX 480, GTX 580)


Dreamlinux, OpenGL test



In my recent programming sessions under Linux, I used NVIDIA’s proprietary, closed-source drivers (64-bit version, under Mint 10 64-bit) without much difference from the Windows ones. A few days ago, I decided to test Dreamlinux 5.0, a 32-bit Linux distro based on Debian 7 Wheezy. Dreamlinux comes with Nouveau, the open source driver for NVIDIA cards. That was a nice opportunity to play for the first time with this famous driver and, above all, to clarify some terms like Gallium3D, llvmpipe, Mesa3D or REnouveau…

At the top, there is Mesa3D:

Mesa is an open-source implementation of the OpenGL specification – a system for rendering interactive 3D graphics.

A variety of device drivers allows Mesa to be used in many different environments ranging from software emulation to complete hardware acceleration for modern GPUs.



To talk to our GPUs, Mesa uses an interface called Gallium3D (which replaced the old DRI –Direct Rendering Infrastructure– driver interface, also called the Mesa Classic Driver Model). Gallium3D is a redesign of Mesa’s device driver model:

Gallium3D is a new architecture for building 3D graphics drivers. Initially supporting Mesa and Linux graphics drivers, Gallium3D is designed to allow portability to all major operating systems and graphics interfaces.



The Nouveau driver is a specific implementation (GPU Specific Driver) of the Gallium3D interface:

Gallium3D Architecture



The Nouveau driver is not available for all GeForce cards, especially the latest ones (GTX 500 series). Nouveau’s developer team must reverse-engineer NVIDIA’s proprietary drivers (by analyzing memory changes, see the REnouveau project – Reverse Engineering for nouveau) and this is really tough work!

When an accelerated renderer is not available, as for the GeForce GTX 580, a software renderer (or rasterizer) is used. Softpipe is the reference software driver for Gallium3D, but it’s slow. LLVMpipe is a newer and faster software renderer for Gallium3D (x86 processors only):

The Gallium llvmpipe driver is a software rasterizer that uses LLVM to do runtime code generation. Shaders, point/line/triangle rasterization and vertex processing are implemented with LLVM IR which is translated to x86 or x86-64 machine code. Also, the driver is multithreaded to take advantage of multiple CPU cores (up to 8 at this time). It’s the fastest software rasterizer for Mesa.



Both the Nouveau and LLVMpipe renderers expose the OpenGL 2.1 API, the limit imposed by the current version of Mesa3D (v7.11). Mesa3D v8.0 will expose OpenGL 3.0, at least for Intel Sandy Bridge and Ivy Bridge GPUs.
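On a running system, `glxinfo` tells you which of these renderers is actually in use via its “OpenGL renderer string” line. A small heuristic helper (a sketch of my own, not an official tool; the sample strings are typical Mesa / proprietary-driver output):

```python
def classify_renderer(renderer_string: str) -> str:
    """Guess whether a reported OpenGL renderer string comes from a
    software rasterizer, a hardware Gallium3D driver, or something
    else (e.g. NVIDIA's proprietary driver). Heuristic only."""
    s = renderer_string.lower()
    if "llvmpipe" in s or "softpipe" in s:
        return "software rasterizer"
    if "gallium" in s:
        return "hardware Gallium3D driver"
    return "unknown / proprietary"

# Typical strings reported by `glxinfo | grep renderer`:
print(classify_renderer("Gallium 0.4 on llvmpipe (LLVM 0x209)"))
print(classify_renderer("Gallium 0.4 on NVA0"))        # Nouveau, GTX 280
print(classify_renderer("GeForce GTX 280/PCI/SSE2"))   # NVIDIA proprietary
```

This is handy for spotting the case described below, where an unsupported card silently falls back to llvmpipe.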

The following screenshot shows an OpenGL 2 test with the LLVMpipe renderer (LLVMpipe because I plugged in a GTX 580, which is not yet supported by Nouveau):

OpenGL test, Gallium3D llvmpipe software renderer

The demo runs at around 100 FPS, not bad for a software renderer.

For older graphics cards, Nouveau comes with hardware-accelerated renderers. For example, the GeForce GTX 280 is driven by the NVA0 renderer. This page lists all the NVIDIA codenames used in the Nouveau 3D driver.

I plugged in a GTX 280 in place of the GTX 580 and relaunched my OpenGL test:

Nouveau driver, OpenGL test, NVA0 renderer

400 FPS with the Nouveau NVA0 renderer, nice!

According to this page about the freshly released Linux 3.2 kernel, the Nouveau driver should shortly offer hardware support for the GTX 500 series via the NVC8 renderer:

The Nouveau driver now uses the acceleration functions that are available with the auto-generated firmware on the Fermi graphic cores NVC1 (GeForce GT 415M, 420, 420M, 425M, 430, 435M, 525M, 530, 540M, 550M and 555, as well as Quadro 600 and 1000M), NVC8 (GeForce GTX 560 Ti OEM, 570, 580 and 590 as well as Quadro 3000M, 4000M and 5010M) and NVCF (GeForce GTX 550 Ti and 560M) chips; Linux 3.2 will be the first kernel version to support the latter graphics chips. Several other kernel modifications provide the foundations for power saving features in the Nouveau driver which future kernel versions are expected to use.

Can’t wait to test it!

Now the question: is the Nouveau driver fast? Especially compared to the NVIDIA proprietary driver?

To find the answer, I tested my OpenGL demo under BackBox Linux 2.01. I love all these Linux distros ;)

BackBox is based on Ubuntu and comes with NVIDIA’s R270.41 driver (or I got R270.41 via an update, I can’t remember). I launched my test app:

Backbox Linux, OpenGL test, R270.41 driver, GeForce GTX 280

Ouch! More than 5000 FPS (still with the GTX 280).

And with a GeForce GTX 480?

This time, it’s the Nouveau NVC0 renderer that drives the GTX 480 under Dreamlinux 5.0:

Dreamlinux, OpenGL test, GeForce GTX 480, Nouveau NVC0 renderer

Around 600 FPS.

Now, the GTX 480 with the NVIDIA proprietary driver under BackBox 2.01:

Backbox Linux, OpenGL test, R270.41 driver, GeForce GTX 480

Around 10000 FPS :D

Phoronix has a performance test (Nouveau’s OpenGL Performance Approaches The NVIDIA Driver) with entry-level cards. For GPUs with few shader cores, the Nouveau driver approaches the performance of the NVIDIA one, but for GPUs with more shader cores, the gap is larger. And for recent GPUs, the gap is… abyssal!

The road to reach NVIDIA’s drivers performance for recent GeForce cards is long, very long…

If you’re not a Linux noob like me (actually, even if you are a Linux noob!), don’t hesitate to bring additional information or to point out my mistakes. Always Share Your Knowledge!

Some references:


Update (2012.01.11): here are the kernel versions for Dreamlinux and Backbox Linux:

Dreamlinux, kernel information

Backbox Linux, kernel information





(Test) ASIC Quality of GeForce GPUs


NVIDIA GeForce GPU

The latest GPU-Z 0.5.8 includes a new feature that displays the quality of the GPU (ASIC quality) of recent graphics cards (GeForce GTX 400, GTX 500 and Radeon HD 7800, HD 7900 series). Not all GPUs on a silicon wafer have the same quality: some dies are finer than others. The finest dies are reserved for ultra-high-end cards (like MSI’s Lightning series, ASUS Matrix or EVGA Classified) while pieces of silicon with a large amount of electrical leakage end up in entry-level cards… The ASIC quality should let you quickly tell whether a GPU is overclockable or not.

Here are the ASIC qualities for some GeForce cards:


MSI GeForce GTX 580 Lightning – ASIC quality: 96.3%
GPU-Z, ASIC quality, MSI GeForce GTX 580 Lightning


EVGA GeForce GTX 480 – ASIC quality: 88.6%
GPU-Z, ASIC quality, EVGA GeForce GTX 480


MSI GeForce GTX 470 – ASIC quality: 85.4%
GPU-Z 0.5.8, ASIC quality of a MSI GTX 470


MSI GeForce GTX 460 Cyclone (sample number one) – ASIC quality: 75.4%
GPU-Z 0.5.8, ASIC quality of a MSI GTX 460


MSI GeForce GTX 460 Cyclone (sample number two) – ASIC quality: 62.0%
GPU-Z 0.5.8, ASIC quality of a MSI GTX 460


EVGA GeForce GTX 580 SC – ASIC quality: 60.0%
GPU-Z, ASIC quality, EVGA GeForce GTX 580 SC


ASUS GeForce GT 520 Silent – ASIC quality: 56.3%
GPU-Z, ASIC quality, ASUS GeForce GT 520



As you can see with MSI’s GTX 460 Cyclone samples, you can’t rely too much on ASIC quality to choose a graphics card, because even on the same model (here the MSI GeForce GTX 460 Cyclone 768D5 OC), ASIC quality values can be quite different. As I said, the finest GPUs are found in high-end graphics cards (MSI’s Lightning: > 96%) while the worst ones are found in entry-level cards (ASUS GT 520: 56%).

This test with two identical cards (MSI Cyclone) is interesting because it shows that ASIC quality detection is not based on a database. Hence the question: how is ASIC quality detection done and, above all, is it reliable?

And you, what is the ASIC quality of your GPU?


Update (2012.01.23): the ASIC quality detection is probably based on the GPU voltage. Here is what AMD’s Dave Baumann says:

Actually, it does the opposite! We scale the voltage based on leakage, so the higher leakage parts use lower voltage and the lower leakage parts use a higher voltage – what this is does narrow the entire TDP range of the product.

Everything is qualified at worst case anyway; all the TDP calcs and the fan settings are completed on the worst case for the product range.

Update (2012.01.23): according to Geeks3D readers’ screenshots, the ASIC detection seems to have some problems: values can be greater than 100%. So wait for the next versions of GPU-Z to see whether values above 100% come from a simple bug or not…



GeForce GTX 465 – ASIC quality: 101.4% – (user: Voodootool)
GPU-Z, ASIC quality, GeForce GTX 465


EVGA GeForce GTX 560 Ti – ASIC quality: 102.3% – (user: Woodz)
GPU-Z, ASIC quality, EVGA GeForce GTX 560 Ti


MSI GeForce GTX 570 – ASIC quality: 136.0% – (user: R.I.P)
GPU-Z, ASIC quality, GeForce GTX 570




(Test) AMD Catalyst 8.921.2 RC11 for Radeon HD 7900, Big Performance Boost in OpenGL Tessellation (*** Updated ***)


TessMark, OpenGL Tessellation Benchmark

AMD has released a new release candidate of the upcoming WHQL driver (Cat 12.01). This RC11 brings important performance gains in OpenGL tessellation, especially when high levels of tessellation are required. According to the release notes, there is a big boost in TessMark 0.3.0 at the insane level (tess factor of 64X):

Performance highlights of the 8.921.2 RC11 AMD Radeon™ HD 7900 driver
8% (up to) performance improvement in Aliens vs. Predator
15% (up to) performance improvement in Battleforge with Anti-Aliasing enabled
3% (up to) performance improvement in Battlefield 3
3% (up to) performance improvement in Crysis 2
6% (up to) performance improvement in Crysis Warhead
10% (up to) performance improvement in F1 2010
5% (up to) performance improvement in Unigine with Anti-Aliasing enabled
250% (up to) performance improvement in TessMark (OpenGL) when set to “insane” levels



I don’t have a HD 7970 yet, only a HD 6970. So let’s see if the performance boost is also visible on a HD 6900. Here is the comparison between Catalyst 11.6 (the last Catalyst that brought important gains in OpenGL tessellation, see HERE) and Catalyst 8.921.2 RC11:

TessMark settings: map set 1, 1920×1080 fullscreen, 60 seconds, no AA, no postfx:

- tess level: moderate (X8) – Gain: -1.6%

Cat11.6: 44090 points, 735 FPS – SAPPHIRE Radeon HD 6970
Cat 8.921.2 RC11: 43364 points, 723 FPS – SAPPHIRE Radeon HD 6970

- tess level: normal (X16) – Gain: +0.4%

Cat11.6: 19398 points, 323 FPS – SAPPHIRE Radeon HD 6970
Cat 8.921.2 RC11: 19480 points, 325 FPS – SAPPHIRE Radeon HD 6970

- tess level: Extreme (X32) – Gain: +27.5%

Cat11.6: 3397 points, 57 FPS – SAPPHIRE Radeon HD 6970
Cat 8.921.2 RC11: 4334 points, 72 FPS – SAPPHIRE Radeon HD 6970

- tess level: Insane (X64) – Gain: +80.6%

Cat11.6: 594 points, 10 FPS – SAPPHIRE Radeon HD 6970
Cat 8.921.2 RC11: 1073 points, 18 FPS – SAPPHIRE Radeon HD 6970
Cat 8.921.2 RC11: 560 points, 10 FPS – SAPPHIRE Radeon HD 6970, TessMark renamed (toto.exe)

Indeed, there is a huge performance boost at high levels of tessellation (X32 and X64) while scores at regular levels (X8 and X16) remain the same. I hope I can test a HD 7970 shortly…
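For reference, the gains quoted above are simply the relative changes of the point scores. A tiny helper reproduces them (scores copied from the runs above; `gain_percent` is my own illustrative name, and small rounding differences aside the numbers match):

```python
def gain_percent(old_score: float, new_score: float) -> float:
    """Relative gain of new_score over old_score, in percent."""
    return (new_score / old_score - 1.0) * 100.0

# TessMark point scores: Catalyst 11.6 vs 8.921.2 RC11 (HD 6970)
for label, old, new in [("X8",  44090, 43364),
                        ("X16", 19398, 19480),
                        ("X32",  3397,  4334),
                        ("X64",   594,  1073)]:
    print(f"{label}: {gain_percent(old, new):+.1f}%")
```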


UPDATE (2012.01.24): I received some explanations from AMD: the performance improvement in tessellation is generic for applications with large tessellation factors (read: greater than 16, the max being 64) but is not necessarily optimal for more reasonable settings (like X8). That’s why AMD decided to enable the performance improvement on a per-application basis.

I added a score with TessMark renamed to toto.exe to show the difference between the performance improvement enabled and disabled.


You can download Catalyst 8.921.2 RC11 for Radeon HD 7900 HERE.

Catalyst 8.921.2 RC11 is an OpenGL 4.2 driver with 233 OpenGL extensions:

- Drivers Version: 8.921.2.0 – Catalyst 11.12 (1-19-2012)
- ATI Catalyst Release Version String: 8.921.2-120119a-132101E-ATI
- OpenGL Version: 4.2.11338 Compatibility Profile/Debug Context
- OpenGL Extensions: 233 extensions (GL=212 and WGL=21)

GPU Caps Viewer
GPU Caps Viewer 1.14.6



Compared to Catalyst 11.10 Preview 3, there is one new extension:






Here is the complete list of all OpenGL extensions exposed for a Radeon HD 6970 (Win7 64-bit):

  • GL_AMDX_debug_output
  • GL_AMDX_vertex_shader_tessellator
  • GL_AMD_blend_minmax_factor
  • GL_AMD_conservative_depth
  • GL_AMD_debug_output
  • GL_AMD_depth_clamp_separate
  • GL_AMD_draw_buffers_blend
  • GL_AMD_multi_draw_indirect
  • GL_AMD_name_gen_delete
  • GL_AMD_performance_monitor
  • GL_AMD_pinned_memory
  • GL_AMD_sample_positions
  • GL_AMD_seamless_cubemap_per_texture
  • GL_AMD_shader_stencil_export
  • GL_AMD_shader_trace
  • GL_AMD_texture_cube_map_array
  • GL_AMD_texture_texture4
  • GL_AMD_transform_feedback3_lines_triangles
  • GL_AMD_vertex_shader_tessellator
  • GL_ARB_ES2_compatibility
  • GL_ARB_base_instance
  • GL_ARB_blend_func_extended
  • GL_ARB_color_buffer_float
  • GL_ARB_compressed_texture_pixel_storage
  • GL_ARB_conservative_depth
  • GL_ARB_copy_buffer
  • GL_ARB_debug_output
  • GL_ARB_depth_buffer_float
  • GL_ARB_depth_clamp
  • GL_ARB_depth_texture
  • GL_ARB_draw_buffers
  • GL_ARB_draw_buffers_blend
  • GL_ARB_draw_elements_base_vertex
  • GL_ARB_draw_indirect
  • GL_ARB_draw_instanced
  • GL_ARB_explicit_attrib_location
  • GL_ARB_fragment_coord_conventions
  • GL_ARB_fragment_program
  • GL_ARB_fragment_program_shadow
  • GL_ARB_fragment_shader
  • GL_ARB_framebuffer_object
  • GL_ARB_framebuffer_sRGB
  • GL_ARB_geometry_shader4
  • GL_ARB_get_program_binary
  • GL_ARB_gpu_shader5
  • GL_ARB_gpu_shader_fp64
  • GL_ARB_half_float_pixel
  • GL_ARB_half_float_vertex
  • GL_ARB_imaging
  • GL_ARB_instanced_arrays
  • GL_ARB_internalformat_query
  • GL_ARB_map_buffer_alignment
  • GL_ARB_map_buffer_range
  • GL_ARB_multisample
  • GL_ARB_multitexture
  • GL_ARB_occlusion_query
  • GL_ARB_occlusion_query2
  • GL_ARB_pixel_buffer_object
  • GL_ARB_point_parameters
  • GL_ARB_point_sprite
  • GL_ARB_provoking_vertex
  • GL_ARB_sample_shading
  • GL_ARB_sampler_objects
  • GL_ARB_seamless_cube_map
  • GL_ARB_separate_shader_objects
  • GL_ARB_shader_atomic_counters
  • GL_ARB_shader_bit_encoding
  • GL_ARB_shader_image_load_store
  • GL_ARB_shader_objects
  • GL_ARB_shader_precision
  • GL_ARB_shader_stencil_export
  • GL_ARB_shader_subroutine
  • GL_ARB_shader_texture_lod
  • GL_ARB_shading_language_100
  • GL_ARB_shading_language_420pack
  • GL_ARB_shading_language_packing
  • GL_ARB_shadow
  • GL_ARB_shadow_ambient
  • GL_ARB_sync
  • GL_ARB_tessellation_shader
  • GL_ARB_texture_border_clamp
  • GL_ARB_texture_buffer_object
  • GL_ARB_texture_buffer_object_rgb32
  • GL_ARB_texture_compression
  • GL_ARB_texture_compression_bptc
  • GL_ARB_texture_compression_rgtc
  • GL_ARB_texture_cube_map
  • GL_ARB_texture_cube_map_array
  • GL_ARB_texture_env_add
  • GL_ARB_texture_env_combine
  • GL_ARB_texture_env_crossbar
  • GL_ARB_texture_env_dot3
  • GL_ARB_texture_float
  • GL_ARB_texture_gather
  • GL_ARB_texture_mirrored_repeat
  • GL_ARB_texture_multisample
  • GL_ARB_texture_non_power_of_two
  • GL_ARB_texture_query_lod
  • GL_ARB_texture_rectangle
  • GL_ARB_texture_rg
  • GL_ARB_texture_rgb10_a2ui
  • GL_ARB_texture_snorm
  • GL_ARB_texture_storage
  • GL_ARB_timer_query
  • GL_ARB_transform_feedback2
  • GL_ARB_transform_feedback3
  • GL_ARB_transform_feedback_instanced
  • GL_ARB_transpose_matrix
  • GL_ARB_uniform_buffer_object
  • GL_ARB_vertex_array_bgra
  • GL_ARB_vertex_array_object
  • GL_ARB_vertex_attrib_64bit
  • GL_ARB_vertex_buffer_object
  • GL_ARB_vertex_program
  • GL_ARB_vertex_shader
  • GL_ARB_vertex_type_2_10_10_10_rev
  • GL_ARB_viewport_array
  • GL_ARB_window_pos
  • GL_ATI_draw_buffers
  • GL_ATI_envmap_bumpmap
  • GL_ATI_fragment_shader
  • GL_ATI_meminfo
  • GL_ATI_separate_stencil
  • GL_ATI_texture_compression_3dc
  • GL_ATI_texture_env_combine3
  • GL_ATI_texture_float
  • GL_ATI_texture_mirror_once
  • GL_EXT_abgr
  • GL_EXT_bgra
  • GL_EXT_bindable_uniform
  • GL_EXT_blend_color
  • GL_EXT_blend_equation_separate
  • GL_EXT_blend_func_separate
  • GL_EXT_blend_minmax
  • GL_EXT_blend_subtract
  • GL_EXT_compiled_vertex_array
  • GL_EXT_copy_buffer
  • GL_EXT_copy_texture
  • GL_EXT_direct_state_access
  • GL_EXT_draw_buffers2
  • GL_EXT_draw_instanced
  • GL_EXT_draw_range_elements
  • GL_EXT_fog_coord
  • GL_EXT_framebuffer_blit
  • GL_EXT_framebuffer_multisample
  • GL_EXT_framebuffer_object
  • GL_EXT_framebuffer_sRGB
  • GL_EXT_geometry_shader4
  • GL_EXT_gpu_program_parameters
  • GL_EXT_gpu_shader4
  • GL_EXT_histogram
  • GL_EXT_multi_draw_arrays
  • GL_EXT_packed_depth_stencil
  • GL_EXT_packed_float
  • GL_EXT_packed_pixels
  • GL_EXT_pixel_buffer_object
  • GL_EXT_point_parameters
  • GL_EXT_provoking_vertex
  • GL_EXT_rescale_normal
  • GL_EXT_secondary_color
  • GL_EXT_separate_specular_color
  • GL_EXT_shader_image_load_store
  • GL_EXT_shadow_funcs
  • GL_EXT_stencil_wrap
  • GL_EXT_subtexture
  • GL_EXT_texgen_reflection
  • GL_EXT_texture3D
  • GL_EXT_texture_array
  • GL_EXT_texture_buffer_object
  • GL_EXT_texture_compression_bptc
  • GL_EXT_texture_compression_latc
  • GL_EXT_texture_compression_rgtc
  • GL_EXT_texture_compression_s3tc
  • GL_EXT_texture_cube_map
  • GL_EXT_texture_edge_clamp
  • GL_EXT_texture_env_add
  • GL_EXT_texture_env_combine
  • GL_EXT_texture_env_dot3
  • GL_EXT_texture_filter_anisotropic
  • GL_EXT_texture_integer
  • GL_EXT_texture_lod
  • GL_EXT_texture_lod_bias
  • GL_EXT_texture_mirror_clamp
  • GL_EXT_texture_object
  • GL_EXT_texture_rectangle
  • GL_EXT_texture_sRGB
  • GL_EXT_texture_shared_exponent
  • GL_EXT_texture_snorm
  • GL_EXT_texture_storage
  • GL_EXT_texture_swizzle
  • GL_EXT_timer_query
  • GL_EXT_transform_feedback
  • GL_EXT_vertex_array
  • GL_EXT_vertex_array_bgra
  • GL_EXT_vertex_attrib_64bit
  • GL_IBM_texture_mirrored_repeat
  • GL_KTX_buffer_region
  • GL_NV_blend_square
  • GL_NV_conditional_render
  • GL_NV_copy_depth_to_color
  • GL_NV_copy_image
  • GL_NV_explicit_multisample
  • GL_NV_float_buffer
  • GL_NV_half_float
  • GL_NV_primitive_restart
  • GL_NV_texgen_reflection
  • GL_NV_texture_barrier
  • GL_SGIS_generate_mipmap
  • GL_SGIS_texture_edge_clamp
  • GL_SGIS_texture_lod
  • GL_SUN_multi_draw_arrays
  • GL_WIN_swap_hint
  • WGL_EXT_swap_control
  • WGL_ARB_extensions_string
  • WGL_ARB_pixel_format
  • WGL_ATI_pixel_format_float
  • WGL_ARB_pixel_format_float
  • WGL_ARB_multisample
  • WGL_EXT_swap_control_tear
  • WGL_ARB_pbuffer
  • WGL_ARB_render_texture
  • WGL_ARB_make_current_read
  • WGL_EXT_extensions_string
  • WGL_ARB_buffer_region
  • WGL_EXT_framebuffer_sRGB
  • WGL_ATI_render_texture_rectangle
  • WGL_EXT_pixel_format_packed_float
  • WGL_I3D_genlock
  • WGL_NV_swap_group
  • WGL_ARB_create_context
  • WGL_AMD_gpu_association
  • WGL_AMDX_gpu_association
  • WGL_ARB_create_context_profile
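The “GL=212 and WGL=21” breakdown that GPU Caps Viewer reports is just a prefix count over this list. A minimal sketch (`count_by_prefix` is an illustrative helper of mine; feeding it the full 233-entry list above yields (212, 21)):

```python
def count_by_prefix(extensions):
    """Split an OpenGL extension list into (GL, WGL) counts.
    WGL_* extensions are the Windows windowing-system ones."""
    wgl = sum(1 for e in extensions if e.startswith("WGL_"))
    gl = len(extensions) - wgl
    return gl, wgl

# Short sample; the full list above gives (212, 21) -> 233 total.
sample = ["GL_ARB_tessellation_shader", "GL_AMD_pinned_memory",
          "WGL_ARB_create_context", "WGL_EXT_swap_control"]
print(count_by_prefix(sample))
```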



Source: Geeks3D forum

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day


AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day



AMD has released a Direct3D 11 tech-demo to showcase the performance of the Radeon HD 7900 series. This tech-demo is based on a (home-made?) engine in version 3.3 (build 1). It seems AMD ditched the Trinigy engine used in the previous HK2207 tech-demo. This D3D11 demo uses Bullet for physics and FMOD for sound.

The Leo demo showcases a real-time, DirectX® 11 based lighting pipeline that is designed to allow for rendering scenes made of arbitrarily complex materials (including transparencies), multiple lighting models, and minimal restrictions on the number of lights that can be used — all while supporting hardware MSAA and efficient memory usage.

Specifically, this demo uses DirectCompute to cull and manage lights in a scene. The end result is a per-pixel or per-tile list of lights that forward-render based shaders use for lighting each pixel. This technique also allows for adding one bounce global illumination effects by spawning virtual point light sources where light strikes a surface. Finally, the lighting in this demo is physically based in that it is fully HDR and the material and reflection models take advantage of the ALU power of the AMD Radeon HD 7900 GPU to calculate physically accurate light and surface interactions (multiple BRDF equations, realistic use of index of refraction, absorption based on wavelength for metals, etc).
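The per-tile light lists described above (built with DirectCompute in the demo) can be sketched on the CPU: split the screen into tiles and keep, for each tile, only the lights whose screen-space bounding circle overlaps it; the forward shaders then loop over that short list instead of over every light. A simplified 2D sketch (all names here are illustrative, nothing comes from the demo itself):

```python
# Simplified 2D tile-based light culling ("Forward+" style):
# lights are (screen_x, screen_y, radius), already projected to pixels.

TILE = 16  # tile size in pixels, a common choice

def build_tile_light_lists(width, height, lights):
    """Return {(tile_x, tile_y): [light indices]} of overlapping lights."""
    tiles = {}
    for tx in range(0, width, TILE):
        for ty in range(0, height, TILE):
            hits = []
            for i, (lx, ly, r) in enumerate(lights):
                # Closest point of the tile rectangle to the light center.
                cx = min(max(lx, tx), tx + TILE)
                cy = min(max(ly, ty), ty + TILE)
                if (lx - cx) ** 2 + (ly - cy) ** 2 <= r * r:
                    hits.append(i)
            tiles[(tx // TILE, ty // TILE)] = hits
    return tiles

lists = build_tile_light_lists(64, 64, [(8.0, 8.0, 10.0), (60.0, 60.0, 4.0)])
print(lists[(0, 0)], lists[(3, 3)])  # each corner tile sees only its light
```

On the GPU this binning runs in a compute shader with one thread group per tile, which is what lets the demo scale to large light counts.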

You can download the demo from this page (an account is required).

I tested this 740MB demo on two Radeon HD 6970s in CrossFire with an average framerate of 40 FPS. With one HD 6970, the FPS was around 20.

Hi-resolution pictures of the demo can be found here: AMD HD7900 Direct3D 11 Leo tech demo – (17 pictures total).

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day



This demo is much better than the previous tech-demo for the HD 6900 series. But some parts still lack something to be really cool. In the following screenshot, the contact between Leo’s feet and the ground is not realistic. A touch of AO (ambient occlusion) and/or a bit of shadowing could improve the rendering…

AMD Radeon HD 7900 Series Direct3D 11 Demo: Leo in Sneeze The Day



Source: Geeks3D forum

LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs


LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs


Around one year after version 1.0, the new version of LuxMark, an OpenCL benchmark based on LuxRender, is available. LuxRender is a physically based and unbiased rendering engine. Based on state-of-the-art algorithms, LuxRender simulates the flow of light according to physical equations, producing realistic images of photographic quality.

OpenCL logo



LuxMark 2.0 comes with a new rendering engine with multi-platform OpenCL support, two new benchmark scenes and the ability to run the benchmarks on selected compute devices (GPUs and/or CPUs).

Quick test:

Scene: Luxball

LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs
Scene: Luxball – Score: 7872 points (Radeon HD 6970)


LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs
Scene: Luxball – Score: 3120 points (GeForce GTX 460 + GeForce GT 520)



Scene: Sala

LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs
Scene: Sala – Score: 844 points (Radeon HD 6970)


LuxMark 2.0, OpenCL Benchmark for GPUs and CPUs
Scene: Sala – Score: 456 points (GeForce GTX 460 + GeForce GT 520)



Source: Geeks3D forum





A Strong PSU is Required for EVGA’s GTX 580 Classified!


EVGA GTX 580 Classified



A few days ago, I received one of the best GTX 580 cards in the world: EVGA’s Classified model with 3GB of graphics memory. I plugged this big card into my testbed and… ouch! No way to start the system: it powers on and, half a second later, the PSU shuts down. WTF? The PSU is a Thermaltake Toughpower Grand 1050W (I reviewed the 850W version here). It seems the GTX 580 Classified (I suspect the culprit is the 14+3-phase power circuitry, the VRM) is a bit too greedy and needs a PSU that can deliver a significant peak current at startup. Like an electric motor!

I replaced Thermaltake’s PSU with Corsair’s AX1200. Yes! My testbed can now start up!

Conclusion: with an ultra-high-end graphics card like the GTX 580 Classified, always check your PSU: can it handle your new monster?

EVGA GTX 580 Classified, Geeks3D's testbed



More about this awesome card in an upcoming review!

Unigine Heaven 3.0 Released


Unigine Heaven 3.0


The guys at Unigine Corp have updated their popular tessellation-oriented benchmark for Direct3D (11) and OpenGL (4). This new version adds support for Mac OS X 10.7+ as well as Intel HD 3000 GPUs. Mac OS X and the Intel HD 3000 do not support tessellation (I should release a version of TessMark without tessellation too ;) )…

More information can be found HERE and download links are available HERE.

Here is a comparison between Heaven 2.1 and Heaven 3.0 with an ASUS Radeon HD 7770 DC and Catalyst 12.2:

Settings: 1920×1080 fullscreen, tessellation normal, anisotropy: X16, MSAA: X4, shaders: high.

Heaven 2.1

  • OpenGL 4 result – FPS: 20.1, Score: 506
  • Direct3D 11 result – FPS: 28.6, Score: 720

Heaven 3.0

  • OpenGL 4 result – FPS: 21.9, Score: 551
  • Direct3D 11 result – FPS: 36.2, Score: 911



The difference in the Direct3D results is a bit too large. I think I will keep Heaven 2.1 for my graphics card reviews; I don’t have time to re-bench all the cards…

Source: Geeks3D forum

CLBenchmark: New OpenCL Benchmark for Windows (Tested: HD 7970 vs GTX 680)


CLBenchmark - OpenCL benchmark



CLBenchmark is a new OpenCL 1.1 benchmark for Windows (I don’t know whether other OSes like Linux or Mac OS X are or will be supported). CLBenchmark includes 17 different OpenCL tests; you can run them all or individually.

CLBenchmark 1.1 Desktop Edition is an easy-to-use tool for comparing the computational performance of different platforms. It offers an unbiased way of testing and comparing the performance of implementations of OpenCL 1.1, a royalty-free standard for heterogeneous parallel programming.

CLBenchmark compares the strengths and weaknesses of different hardware architectures such as CPUs, GPUs and APUs. The test results are listed in a transparent and public OpenCL performance database.

CLBenchmark 1.1 Desktop Edition supports any standard-compliant OpenCL 1.1 implementation and it is compatible with every major vendor’s solution.

CLBenchmark - OpenCL benchmark



I quickly tested it with a Radeon HD 7970 and a GeForce GTX 680… But I’m sure you already know the result ;)

MSI Radeon HD 7970
CLBenchmark - OpenCL benchmark



EVGA GeForce GTX 680
CLBenchmark - OpenCL benchmark



You can download CLBenchmark from this link. CLBenchmark homepage can be found HERE.




Intel Ivy Bridge HD Graphics 4000 GPU: OpenGL and OpenCL Tests


Intel Ivy Bridge, OpenGL and OpenCL tests




Intel Ivy Bridge HD 4000 GPU test – Index


1 – Ivy Bridge Overview

Intel has officially launched Ivy Bridge, its new family of processors that combine a CPU and a GPU on the same die. Ivy Bridge is a tick, a die shrink and refinement of the Sandy Bridge processor (which was a tock in Intel’s terminology).

Intel, tick tock processors



The Ivy Bridge processor (LGA 1155 socket) is built on Intel’s new 22nm process (Sandy Bridge: 32nm), incorporates Intel’s new tri-gate (or 3D) transistor technology, and packs a 4-core CPU and a 16-EU (or 16 shader cores) GPU:

Intel, Ivy Bridge architecture



Ivy Bridge’s GPU is codenamed HD 4000. This GPU has 16 EUs (Execution Units, with 8 threads per EU) and 2 texture units, and the big new thing: it’s a DX11 GPU. Yes, that means you can do hardware tessellation on a HD 4000. Currently, Intel provides a Direct3D 11 driver only, so for now we have to forget OpenGL tessellation. But it’s only a matter of time, and I’m sure (I hope…) Intel will shortly release an OpenGL 4.x capable driver.

Intel, Ivy Bridge architecture



Here are some details about the Ivy Bridge processor I used for this article:

Intel, Ivy Bridge processor, CPU-Z

Intel, Ivy Bridge processor, GPU Caps Viewer








Intel Ivy Bridge HD 4000 GPU test – Index


Intel Ivy Bridge HD Graphics 4000 Extra Tests


Intel Ivy Bridge Processor

As requested by some readers of the Intel Ivy Bridge HD Graphics 4000 GPU: OpenGL and OpenCL Tests article, here are some additional tests of the HD Graphics 4000, the GPU embedded in the Ivy Bridge processor.


AMD HK2207 DX11 demo

HK2207 is a Direct3D 11 tech-demo for the AMD Radeon HD 6900 series. Unfortunately, on Intel’s HD 4000, only a few flares are visible:

AMD HK2207 DX11 demo


AMD LadyBug DX11 demo

LadyBug is a Direct3D 11 tech-demo released for the Radeon HD 5800 series. Good news, this demo works fine on the HD 4000. But it runs very very slowly: around 1 FPS…

AMD LadyBug DX11 demo


NVIDIA Island DX11 tessellation demo

Island is a Direct3D 11 tessellation-focused tech-demo released for the GeForce GTX 480 / GTX 470. This demo runs at around 3 FPS on the HD 4000.

NVIDIA Island DX11 demo


3DMark11 DX11 benchmark

I tested the Entry (E) and Performance (P) scores only:

Intel Ivy Bridge HD 4000 - 3DMark11 Entry score

Intel Ivy Bridge HD 4000 - 3DMark11 Performance score


Unigine Heaven 2.1 DX11 test

I tested Heaven 2.1 with default settings.

Intel Ivy Bridge HD 4000 - Unigine Heaven


GPCBenchmarkOCL OpenCL test

GPCBenchmarkOCL is an OpenCL 1.1 benchmark suite. It includes several different OpenCL tests:

Intel Ivy Bridge HD 4000 - GPCBenchmarkOCL OpenCL test

Intel Ivy Bridge HD 4000 - GPCBenchmarkOCL OpenCL test
Common maths test: works fine.

Intel Ivy Bridge HD 4000 - GPCBenchmarkOCL OpenCL test
Int32 test: works fine.

Intel Ivy Bridge HD 4000 - GPCBenchmarkOCL OpenCL test
SHA1 test: works fine.

Intel Ivy Bridge HD 4000 - GPCBenchmarkOCL OpenCL test
Global memory test: does not work.

Laguna: Real-time OpenCL Path Tracer


Laguna: Real-time OpenCL Pathtracer



Laguna is a small new OpenCL path tracer (ray tracer) based on a precalculated block structure.

OpenCL logo



I tested Laguna on an EVGA GeForce GTX 680 (R301.24) and an MSI Radeon HD 7970 (Cat 12.4) at resolutions of 800×600 and 1920×1080 (you can change the settings in the scene.txt file):

1920×1080 fullscreen:

  • GTX 680: 12 FPS
  • HD 7970: 13 FPS



800×600 windowed:

  • GTX 680: 41 FPS
  • HD 7970: 51 FPS
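For scale, here is a back-of-envelope throughput calculation from these numbers, assuming (hypothetically) one primary ray per pixel per frame:

```python
# Approximate path-tracing throughput from the Laguna results above.
def mpixels_per_sec(width, height, fps):
    """Millions of pixels traced per second at a given resolution and FPS."""
    return width * height * fps / 1e6

results = {
    ("GTX 680", "1920x1080"): mpixels_per_sec(1920, 1080, 12),
    ("HD 7970", "1920x1080"): mpixels_per_sec(1920, 1080, 13),
    ("GTX 680", "800x600"):   mpixels_per_sec(800, 600, 41),
    ("HD 7970", "800x600"):   mpixels_per_sec(800, 600, 51),
}
for (card, res), mps in results.items():
    print(f"{card} @ {res}: {mps:.1f} Mpixels/s")
```

Interestingly, both cards push more pixels per second at 1920×1080 than at 800×600, hinting at per-frame overhead or better GPU occupancy at the higher resolution.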






You can download Laguna here (left-click to grab the file):
Download Laguna OpenCL path tracer Version 1.0



Thanks to Arjan van Wijnen (the author of Laguna) for the news!

Laguna: Real-time OpenCL Pathtracer

(PhysX Test) GTX 680 vs GTX 580 vs GTX 480 in FluidMark


PhysX test - FluidMark - GTX 680 vs GTX 580 vs GTX 480

Some readers asked me how much better the GeForce GTX 680 is at PhysX compared to the previous GTX 580 and GTX 480. To provide some answers, I quickly tested the GTX 480, GTX 580 and GTX 680 with FluidMark 1.5.0 using different settings (on the H67 testbed). Graphics drivers: R301.24.

EVGA GeForce GTX 680



The GeForce GTX 480 is the reference card (score is 100%).


Test 1 – Preset:720 (30000 SPH particles, 1280×720 fullscreen)

EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
- 4699 points, 77 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
- 5590 points, 92 FPS (118%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
- 8395 points, 137 FPS (178%)


Test 2 – Preset:1080 (60000 SPH particles, 1920×1080 fullscreen)

EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
- 1509 points, 25 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
- 1807 points, 30 FPS (119%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
- 3576 points, 59 FPS (236%)


Test 3 – Custom settings: 120000 SPH particles, 1920×1080 fullscreen

PhysX test - Custom settings, 120k particles



EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
- 253 points, 21 FPS (100%)
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
- 300 points, 25 FPS (118%)
EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
- 530 points, 44 FPS (209%)



If we take the GTX 480 as the reference card (100%), the GTX 580 is around 19% faster, while the GTX 680 is 78% to 136% faster in these PhysX fluid tests. The performance boost of the GTX 680 is rather impressive…
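The percentages can be reproduced from the raw scores; matching the published figures suggests they are truncated rather than rounded (an assumption based on the numbers above):

```python
import math

def relative_score(score, baseline):
    """Score as a truncated percentage of the GTX 480 baseline score."""
    return math.floor(score / baseline * 100)

# Preset:1080 run (GTX 480 baseline: 1509 points)
print(relative_score(1807, 1509))  # GTX 580 -> 119
print(relative_score(3576, 1509))  # GTX 680 -> 236
```

The same formula reproduces the Preset:720 figures (118% and 178%) and the custom-settings figures (118% and 209%).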



PhysX test - FluidMark score, GTX 680

PhysX test - FluidMark


Intel Ivy Bridge HD Graphics 4000 GPU: OpenGL 4 Tessellation Tested


Intel Ivy Bridge HD Graphics 4000 GPU: OpenGL 4 Tessellation Tested


Version 2729 of the Intel HD Graphics driver has been released with OpenGL 4 support. It’s rather a big surprise, even if it was somewhat foreseeable. So let’s see how the HD Graphics 4000 GPU (Ivy Bridge processor) handles OpenGL 4 tessellation in the following tests: TessMark 0.3.0 and Unigine Heaven 2.1.

- Testbed for discrete graphics cards: Asus Z77
- Testbed for Ivy Bridge CPU/GPU: Gigabyte Z77


1 – TessMark: OpenGL 4 Tessellation Benchmark

Intel HD Graphics 4000, TessMark, OpenGL 4

TessMark works like a charm with version 2729 of the Intel HD Graphics driver. This is really a good start for the next versions of the OpenGL 4 driver.

Settings: 1920×1080 fullscreen, map set 1, no postfx, no AA.

Tessellation level: moderate (X8)

- EVGA GeForce GTX 680, R301.24 (Win 7 64-bit)
- 65742 points, 1097 FPS
- MSI Radeon HD 7970, Cat 12.4 (Win 7 64-bit)
- 61961 points, 1032 FPS
- ATI Radeon HD 5770, Cat 12.4 (Win 7 64-bit)
- 23436 points, 391 FPS
- Intel Ivy Bridge Core i7 (2.2GHz), HD 4000, driver v8.15.10.2729 (Win 7 64-bit)
- 6337 points, 106 FPS

Intel HD Graphics 4000, TessMark, OpenGL 4



Tessellation level: normal (X16)

- EVGA GeForce GTX 680, R301.24 (Win 7 64-bit)
- 46521 points, 776 FPS
- MSI Radeon HD 7970, Cat 12.4 (Win 7 64-bit)
- 38583 points, 643 FPS
- ATI Radeon HD 5770, Cat 12.4 (Win 7 64-bit)
- 10974 points, 183 FPS
- Intel Ivy Bridge Core i7 (2.2GHz), HD 4000, driver v8.15.10.2729 (Win 7 64-bit)
- 3300 points, 55 FPS

Tessellation level: extreme (X32)

- EVGA GeForce GTX 680, R301.24 (Win 7 64-bit)
- 24641 points, 411 FPS
- MSI Radeon HD 7970, Cat 12.4 (Win 7 64-bit)
- 14802 points, 247 FPS
- ATI Radeon HD 5770, Cat 12.4 (Win 7 64-bit)
- 2885 points, 48 FPS
- Intel Ivy Bridge Core i7 (2.2GHz), HD 4000, driver v8.15.10.2729 (Win 7 64-bit)
- 1565 points, 26 FPS

Tessellation level: insane (X64)

- EVGA GeForce GTX 680, R301.24 (Win 7 64-bit)
- 10446 points, 174 FPS
- MSI Radeon HD 7970, Cat 12.4 (Win 7 64-bit)
- 4373 points, 73 FPS
- Intel Ivy Bridge Core i7 (2.2GHz), HD 4000, driver v8.15.10.2729 (Win 7 64-bit)
- 767 points, 13 FPS
- ATI Radeon HD 5770, Cat 12.4 (Win 7 64-bit)
- 611 points, 11 FPS

Incredible! The Ivy Bridge HD 4000 GPU is faster than the Radeon HD 5770 at high tessellation levels. This is a very nice result!
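A quick way to see why: compute the FPS drop for each doubling of the tessellation level from the results above (plain Python, numbers taken from this article):

```python
# FPS at tessellation levels X8, X16, X32 and X64, from the TessMark runs above.
def slowdown_ratios(fps_values):
    """Factor by which FPS drops between consecutive tessellation levels."""
    return [round(a / b, 2) for a, b in zip(fps_values, fps_values[1:])]

fps = {
    "HD 4000": [106, 55, 26, 13],
    "HD 5770": [391, 183, 48, 11],
}
for card, values in fps.items():
    print(card, "slowdown per doubling:", slowdown_ratios(values))
```

The HD 4000 loses roughly 2x per doubling, while the HD 5770 falls off far faster past X16, which is why the integrated GPU ends up ahead at X64.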





2 – Unigine Heaven: OpenGL 4 Benchmark

Unigine Heaven




Unigine Heaven is a Direct3D 11 and OpenGL 4 benchmark. So let’s test the OpenGL 4 version.

Settings: 1920×1080 fullscreen, tessellation: normal, shaders: high, AA: 4X MSAA, 16X anisotropic filtering.

- EVGA GeForce GTX 680, R301.24 (Win 7 64-bit)
- 1661 points, 65.9 FPS
- MSI Radeon HD 7970, Cat 12.4 (Win 7 64-bit)
- 1199 points, 47.6 FPS
- ATI Radeon HD 5770, Cat 12.4 (Win 7 64-bit)
- 351 points, 13.9 FPS
- Intel Ivy Bridge Core i7 (2.2GHz), HD 4000, driver v8.15.10.2729 (Win 7 64-bit)
- 108 points, 4.3 FPS

Unigine Heaven, OpenGL 4, Intel Ivy Bridge HD Graphics 4000

(Quick Test) PLA Direct3D 11 Benchmark (GTX680 vs GTX580 vs GTX480)


PLA Direct3D 11 Benchmark



PLA (or Passion Leads Army) is a Direct3D 11 benchmark based on the Unreal Engine 3 game engine. The benchmark features hardware tessellation and PhysX-based physics simulations (flags, particles/explosions, etc.).

Here is a quick test with three graphics cards: a GeForce GTX 680, a GeForce GTX 580 and a GTX 480. I didn’t include Radeon cards because PhysX seems to be completely disabled (no CPU PhysX) with AMD cards.





The specs of the Z77 testbed used can be found HERE. I used the R301.24 beta drivers.



TEST 1 – default settings: 1280×720, texture quality: normal, PhysX: medium

EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
- 135 FPS
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
- 124 FPS
EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
- 105 FPS



TEST 2 – custom settings: 1920×1080, texture quality: high, PhysX: high

PLA Direct3D 11 Benchmark settings

EVGA GeForce GTX 680 (GPU@1097MHz, mem@3004MHz)
- 75 FPS
EVGA GeForce GTX 580 (GPU@797MHz, mem@2025MHz)
- 62 FPS
EVGA GeForce GTX 480 (GPU@700MHz, mem@1848MHz)
- 50 FPS



You can find a download link in this thread on Geeks3D’s forum.




(Tested) A New Dawn: NVIDIA DX11 Tech-Demo for GTX 600 Released


A New Dawn, NVIDIA GTX 600 tech-demo



After some early details about this demo, NVIDIA has released A New Dawn, the first tech-demo for Kepler-based graphics cards. This Direct3D 11 demo targets the GeForce GTX 670, GTX 680 and GTX 690 but also works on Fermi-based cards.

In 2002, NVIDIA released a demo called Dawn to demonstrate the power and programmability of GeForce FX. It showcased a fairy character of extraordinary detail, seamless curves, and lifelike expressions. Among the many technology demos NVIDIA has released before and since, Dawn has remained the most memorable. It was the first time a fully animated and totally credible character was brought to life in real time. Even today, many games have yet to realize a character with Dawn’s level of detail.

Fast forward ten years, and NVIDIA has brought back Dawn once again, in a demo simply titled “A New Dawn.” A New Dawn is designed to showcase the graphical possibilities on the latest generation of Kepler based GPUs.

In A New Dawn, the demo starts not with the main character, but with a sweeping overview of a lush rainforest. As our character comes into view, we find her swinging on a vine in her new tree home. The tree is rendered to the finest level of detail using DirectX 11 tessellation. At its peak, over four million triangles are used to showcase Dawn’s environment.

Dawn’s skin has also received a complete overhaul. Human skin is one of the trickiest materials to simulate. Unlike a concrete object that only absorbs or transmits light, human skin is more akin to a block of jelly; light enters, jiggles around in multiple layers of skin and flesh, and exits in a new direction due to sub-surface scattering.

The original Dawn demo was the first to show a fully credible, 3D character in real time. The Nalu demo added detailed, physically simulated hair. The Adrianne Curry demo pushed the limits of realism in skin shading. A New Dawn demo is a synthesis of all of these demos, as well as over a decade of techniques and advancement made in the realm of realtime 3D graphics.

You can download the demo from THIS PAGE. A detailed article about the demo can be found here: A New Dawn DirectX 11 Demo Available For Download.

I tested the demo with an EVGA GeForce GTX 680. At full HD resolution (1920×1080, Ultra Mode ON), the demo ran at around 25 FPS. The demo is very cool and the graphics are nice. You can control several parameters like daylight, depth of field or hair stiffness.

A New Dawn, NVIDIA GTX 600 tech-demo (screenshot gallery)

Source: Geeks3D forum



(Quick Test) New 3DMark Gaming Benchmark Available


3DMark



The new 3DMark is available. You can download the Windows version from this page: 3DMark downloads.

You have to call it 3DMark, not 3DMark13 or anything else. Here is the explanation from Futuremark:

3DMark



This new version is cross-platform (Windows, Windows RT, Android and iOS devices) and includes three tests:

  • Ice Storm (1280×720)
  • Cloud Gate (1280×720)
  • Fire Strike (default:1920×1080, extreme: 2560×1440)

3DMark


Ice Storm

Ice Storm is a cross-platform benchmark for mobile devices. Use it to test the performance of your smartphone, tablet, ultra-portable notebook or entry-level PC. Ice Storm includes two graphics tests focusing on GPU performance and a physics test targeting CPU performance.

On Windows, Ice Storm uses a DirectX 11 engine limited to Direct3D feature level 9, making it the ideal benchmark for modern portable devices targeting that feature level. On Android and iOS, Ice Storm uses OpenGL ES 2.0.


Ice Storm uses the same engine on all platforms and supports the following features.

  • Traditional forward rendering using one pass per light.
  • Scene updating and visibility computations are multithreaded.
  • Draw calls are issued from a single thread.
  • Support for skinned and static geometries.
  • Surface lighting model is basic Blinn Phong.
  • Supported light types include unshadowed point light and optionally shadow mapped directional light as well as pre-computed environmental cube.
  • Support for transparent geometries and particle effects.
  • 16-bit color formats are used in illumination buffers if supported by the hardware.



Here are my scores with a GTX 660 and a HD 7970:

3DMark Ice Storm score - GeForce GTX 660

3DMark Ice Storm score - Radeon HD 7970



3DMark Ice Storm

3DMark Ice Storm


Cloud Gate

Cloud Gate is a new test designed for Windows notebooks and typical home PCs. It is a particularly good benchmark for systems with integrated graphics. Cloud Gate includes two graphics tests and a physics test. The benchmark uses a DirectX 11 engine limited to Direct3D feature level 10 making it suitable for testing DirectX 10 compatible hardware. Cloud Gate is only available in the Windows edition of 3DMark.

Cloud Gate tests use the same engine as Fire Strike, but with a reduced set of features, including a simplified lighting model and some fallbacks implemented for Direct3D feature level 10.



Here are my scores with a GTX 660 and a HD 7970:

3DMark Cloud Gate score - GeForce GTX 660

3DMark Cloud Gate score - Radeon HD 7970



3DMark Cloud Gate

3DMark Cloud Gate


Fire Strike

Fire Strike is the new showcase DirectX 11 benchmark for high-performance gaming PCs. Using a multi-threaded DirectX 11 engine, Fire Strike includes two graphics tests, a physics test and a combined test designed to stress the CPU and GPU at the same time.



The Fire Strike engine includes all the latest technologies: multithreading (based on DX11 deferred device contexts and command lists), tessellation, deferred rendering, surface illumination combining Oren-Nayar diffuse reflectance with Cook-Torrance specular reflectance (or a basic Blinn-Phong reflectance model), volumetric illumination, particle illumination, depth of field, lens reflections, bloom, antialiasing (MSAA and FXAA) and smoke simulation.

Here are my scores with a GTX 660 and a HD 7970:

3DMark Fire Strike score - GeForce GTX 660

3DMark Fire Strike score - Radeon HD 7970



3DMark Fire Strike

3DMark Fire Strike

(Tested) Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive


Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive



Here’s a quick test of the Patriot Supersonic RAGE 32GB USB 3.0 flash drive. I use this flash drive to save my daily coding work, and for this particular task (zip files of several hundred MB), the Supersonic RAGE is pretty fast.

Some pictures of the Patriot Supersonic RAGE 32GB:

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive



Now a quick benchmark with CrystalDiskMark 3.0.2 (64-bit):

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive - CrystalDiskMark test

The specs claim up to 180MB/s READ and up to 50MB/s WRITE. The CrystalDiskMark test shows that the specs hold up: 193MB/s in read operations and 48MB/s in write operations.
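At these rates, a rough estimate of transfer times (hypothetical file sizes, plain Python):

```python
# Time estimates at the measured CrystalDiskMark write rate (48 MB/s).
def transfer_seconds(size_mb, rate_mb_s):
    """Seconds needed to move size_mb megabytes at a sustained rate."""
    return size_mb / rate_mb_s

# Writing a ~500 MB zip of source code, then filling the whole 32 GB stick:
print(f"500 MB write: {transfer_seconds(500, 48):.0f} s")
print(f"32 GB write:  {transfer_seconds(32 * 1024, 48) / 60:.1f} min")
```

About ten seconds for a typical backup zip, and a bit over eleven minutes to fill the whole drive at the sustained write speed.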



Update (2013.03.14): ATTO Disk Benchmark
I also ran the test with another popular disk utility: ATTO Disk Benchmark v2.46.

Patriot Supersonic RAGE 32GB USB 3.0 Flash Drive - ATTO Disk Benchmark v2.46 test


With ATTO, the transfer rate is around 45MB/s in writing and around 190MB/s in reading.



And for the fun, here’s the CrystalDiskMark test of an 8GB USB 2.0 flash drive from EMTEC (tested on the same machine):

EMTEC 8GB USB 2.0 flash drive - CrystalDiskMark test



The French version of this article is available here: Test : Clé USB 3.0 Patriot Supersonic RAGE 32Go.
