I have a problem with this graphics card: when I connect it to my monitor over DVI the picture is normal, but when I connect it with an HDMI cable the picture on my monitor has a pink tint. Any solutions for this problem?
I have tried another HDMI cable; same problem!
Posted April 21 at 08:03 | Views: 881 total, 4 today
Listing description, "AMD Radeon HD 7900 Series 3GB graphics card":
Item type: Components, Graphics cards
The fastest and most versatile graphics card in the world
Discover incredible high-resolution gaming with an AMD Radeon™ HD 7900 series desktop graphics card. Stunning.
Support for Microsoft Windows® 8 and DirectX® 11.2
The GCN architecture raises performance to an unprecedented level, and image quality is beyond praise
Fully immersive gaming with AMD Eyefinity multi-display technology
A fundamentally new architecture at the heart of the most advanced and versatile graphics cards
Fully immersive gaming across multiple monitors
AMD App Acceleration
Watch HD movies, video, games and applications with incredible speed and performance
AMD HD3D technology
Open up the third dimension on 3D-capable PCs
The power of multiple graphics cards for extreme high-resolution 3D gaming
Push the limits of what's possible with dynamic gaming performance
Lets the AMD Radeon™ graphics card consume almost no power at idle
GPU CLOCK: 850 MHz (up to 925 MHz with boost)
MEMORY: 3 GB GDDR5
MEMORY CLOCK: 1250 MHz (5.0 Gbps GDDR5)
MEMORY BANDWIDTH: 240 GB/s
SINGLE-PRECISION COMPUTE PERFORMANCE: 2.87 TFLOPS
(ONE COMPUTE UNIT EQUALS 64 STREAM PROCESSORS)
28 compute units (1792 stream processors)
112 texture units
128 Z/stencil ROPs
32 color ROPs
Two geometry engines
Two asynchronous compute engines (ACE)
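The headline numbers in this spec sheet are internally consistent; a quick sanity check in Python (assuming GDDR5's usual 4 transfers per clock, the 384-bit bus that 240 GB/s at 5.0 Gbps implies, and 2 FLOPs per stream processor per clock; note the 2.87 TFLOPS figure corresponds to the HD 7950's original 800 MHz base clock rather than the boosted clocks listed):

```python
# Sanity-check the HD 7900 series (3 GB) spec-sheet numbers.

mem_clock_mhz = 1250                         # listed memory clock
data_rate_gbps = mem_clock_mhz * 4 / 1000    # GDDR5: 4 transfers per clock per pin
bus_width_bits = 384                         # implied by 240 GB/s at 5.0 Gbps

bandwidth_gbs = data_rate_gbps * bus_width_bits / 8
print(f"{data_rate_gbps:.1f} Gbps per pin, {bandwidth_gbs:.0f} GB/s")

# Single-precision throughput: 2 FLOPs (one fused multiply-add) per
# stream processor per clock, at the 800 MHz HD 7950 base clock.
stream_processors = 28 * 64                  # 28 compute units x 64 SPs = 1792
tflops = stream_processors * 2 * 800e6 / 1e12
print(f"{tflops:.2f} TFLOPS")
```

Both results land exactly on the listed 240 GB/s and 2.87 TFLOPS figures.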
The specifications of AMD’s next generation of graphics cards seem to have surfaced. The HD 7900 series (HD 7950, HD 7970 and HD 7990) will almost certainly launch in Q1 2012 on a 28nm process. The GPU, codenamed Tahiti (PRO and XT), will use a new architecture called GCN, for Graphics Core Next (the current HD 6900 is based on a VLIW4 architecture). The other important change in the HD 7900 series is the presence of XDR2 memory in place of GDDR5. XDR2 memory is twice as fast as GDDR5. The Radeon HD 7970 will have 2048 shader cores (1536 for the HD 6970), 128 texture units and 64 ROPs. And the power consumption will be 190W, 60W less than the HD 6970.
But before releasing the HD 7900, AMD will produce the GPUs for the HD 7800 series (HD 7870, HD 7850 and HD 7650), planned for the end of 2011. The Radeon HD 7870 will be powered by the Thames XT GPU (28nm, VLIW4 architecture, 1536 shader cores) and will be available with 2048MB of GDDR5 memory.
24 thoughts on “AMD Radeon HD 7900 Possible Specifications (XDR2 Memory, AMD Graphics Core Next)”
Interesting move. Considering that nVidia can barely get GDDR5 to run at a decent speed with their shitty memory controller, this will put AMD even further ahead in the game.
The 79XX won’t be cheap. The XDR2 memory requires royalties.
Impressive numbers: XDR2, 28nm, lower power, DX11.1, PCI-E 3.0… well, that's the hammer of Thor, not just a GPU.
DrBalthar talking shit again, I see; nvidia's controller can hit 5.2 GHz quite easily.
Gotta love AMD recycling old architectures with a new model number… 68xx, now 78xx…
Nice paper launch.
HD6970 vs GTX580
Transistors: 2.64 billion vs 3 billion
Core clock: 880 MHz vs 772 MHz
Stream processors / CUDA cores: 1536 vs 512
Compute performance: 2.7 TFLOPS vs 1.58 TFLOPS
Texture units: 96 vs 64
Texture fill rate: 84.5 Gtex/s vs 49.4 Gtex/s
ROPs: 32 vs 48
Pixel fill rate: 28.2 Gpix/s vs 37.1 Gpix/s
Memory: 2 GB GDDR5 vs 1.5 GB GDDR5
Memory clock: 1375 MHz vs 1002 MHz
Memory bandwidth: 176 GB/s (256-bit) vs 192 GB/s (384-bit)
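The fill-rate and TFLOPS figures in this comparison follow directly from the unit counts and clocks it lists, assuming the usual 2 FLOPs per shader per clock and noting that the GTX 580's shaders run at twice the core clock (the "hot clock"). A quick check:

```python
# Derive the throughput figures of the HD6970 vs GTX580 comparison
# from the listed unit counts and clocks.

def gpu_rates(shaders, tmus, rops, core_mhz, shader_mhz=None):
    shader_mhz = shader_mhz or core_mhz
    return {
        "tflops": shaders * 2 * shader_mhz * 1e6 / 1e12,  # 2 FLOPs/clock (FMA)
        "gtex_s": tmus * core_mhz / 1000,                 # 1 texel/clock per TMU
        "gpix_s": rops * core_mhz / 1000,                 # 1 pixel/clock per ROP
    }

hd6970 = gpu_rates(1536, 96, 32, 880)
gtx580 = gpu_rates(512, 64, 48, 772, shader_mhz=1544)     # shaders at 2x core clock

print(hd6970)  # ~2.70 TFLOPS, ~84.5 Gtex/s, ~28.2 Gpix/s
print(gtx580)  # ~1.58 TFLOPS, ~49.4 Gtex/s, ~37.1 Gpix/s
```

Every derived value matches the comparison, so the numbers are at least internally consistent; whether theoretical throughput translates into game performance is a separate question.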
AMD trying to fool customers with high numbers.
@Anon – now add Price and then Perf/Price 😉
@Promilus: now get a job. 😉
@Anon – I already have one 😉 And the purpose of money is not just to spend it… it's to spend it wisely 😛 Of course the GTX580 > HD6970 in pure performance at stock clocks, but not everywhere and not always 😉
These numbers look too good to be true for the low-end cards: the Radeon 7570 with 16 ROPs and a power consumption of 50W. Similarly specced cards from past generations (4600, 5500 series) have used 8 ROPs, with NVIDIA going as low as 4 on their 430, 440 and 520 series. Sure, it is possible that this improvement can be attributed to the 28nm process, but I'd be surprised if these cards end up priced in the $80-100 range.
But when we talk about driver update reliability and in-game crashes, everybody avoids ATI.
I expect this will change in the near future; doing it with AMD chipsets at least could be a good starting point.
@pr0or1337: I've never had a crash due to bad drivers with ATI, but was riddled with them on nVidia. Perhaps it is because I run beta drivers for both, unless no recent beta is available, in which case I run WHQL.
Why avoid them? If a game is open, the drivers can't be updated; simple, isn't it? Update them yourself or disable auto-update.
According to documents circulating around the Internet, only the high-end GPUs will be manufactured using the 28nm process. The lower-end GPUs will be manufactured using the 40nm process. Thus, expect cards with specs similar to AMD's past two generations in the low-end range (definitely not 16 ROPs).
I hate it when people try to sell advice based only on their own experience.
@pr0or1337: but other people have the opposite experience, and it's probably something on your end or a defect with that particular card, not the whole line from AMD (or even nvidia, for that matter). Also, game devs don't always really optimize for AMD.
Thanks for posting; I looked for an AMD roadmap and didn't find anything. I'm going to upgrade my Radeon 4850, but the Radeon 6950 or 560 Ti, and even the 6970 and GTX 580 which I looked at, are weak in today's games like Metro 2033, so I want something more powerful. I'm going to wait for the 7900s.
@Nuk3d – most "optimizations" in games for a specific vendor aren't done by tweaking the game engine but by tweaking drivers. Thanks to its partnership program, NV gets early access to game builds and can optimize its drivers to run them smoothly even BEFORE release. Then, some months after release, AMD ships tweaked drivers with a performance boost in that title, while NV can't boost it any further. Of course, you can also plant obstacles… AMD is not good at tessellating a whole scene with high factors, so you can kill performance on Radeons by doing exactly that. NV is slightly worse at pixel-shader post-processing, so apply enough effects and the 6970 ends up above the GTX580… That's how it works nowadays 🙁
2000 cores in a SIMD design is not going to happen at 28nm.
It looks like a nice GPU. Too bad that ATI has had so many bugs and incompatibilities in games due to nvidia owning the whole market. Anyway, I think it's a nice step forward in performance, and it will push nvidia to make even better GPUs. And finally they are taking power consumption into account!!
And for all the fanboys: stop fighting. Everyone can buy the GPU they like most. I use nvidia because most games are optimized and made for nvidia, besides all the features I get in 3D design using nvidia with CUDA, PhysX, etc. But if ATI offered something better, I'd choose ATI for sure. ATI is nice and cheap, good for gamers without too much money; nVidia is more for gaming enthusiasts and people who want to use the GPU for more than just playing games.
And remember that with more GPU-vs-GPU competition, we get cheaper and better products. So let them fight it out and give us some nice GPUs next year 😀
Wow, the kind of stuff people spew about cards is ridiculous. I have used both companies' products and they're both great. That being said, I am currently using AMD because after 6 Nvidia cards dying and 0 ATI/AMD cards dying, I figured I was wasting my money. Not saying Nvidia is to blame, because none of them were reference cards, but still. Also, I have had driver issues with both brands, and lost several cards to those infamous 196.75 ForceWare drivers that were known to kill cards; never had that issue with ATI/AMD drivers. Both companies have failed their customers in some way or another, but sadly it's a duopoly, so pick your side and move on. They all perform similarly at the same price points, with give and take on features. Several things stand out from reading everyone's comments.
1) Both companies have been known to reuse previous gen architecture so don’t even try and use that as an excuse against the other.
2) Most cards can't handle DX11 worth a damn. Unless you're at the high end or running multiple cards, you're not getting ideal performance.
3) Games like Metro 2033 shouldn't be used as a baseline for evaluating your next purchase unless you plan on playing it all the time. It's a great benchmark and looks amazing, but it's not realistic to use it to judge the future of performance in DX11 titles.
4) PhysX is a zombie; Nvidia is moving away from it, at least in the way we know it now.
5) The argument about Nvidia being better for things other than gaming makes me assume people are talking about OpenCL or general computing. Both companies have strong showings in different applications: for example, Bitcoin mining favors AMD very strongly, while Folding@home favors Nvidia very strongly.
6) Be happy there is more than one player in the Graphics card market. If there wasn’t competition consumers would suffer. You like Nvidia? Great! You love AMD? Fantastic. Just realize that both companies have their strengths and weaknesses.
Yay! Now if only game developers could release some games that could actually utilize the speed of these GPUs. Since every AAA game is a console port these days, I feel I will be safe with my 5870 for quite some time to come. Even my old 9800GTX can max out most new games, even with antialiasing. The only reason you would want one of these is for multi-monitor setups with really huge resolutions like 5760×1080.
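For scale, assuming the triple-1080p Eyefinity layout that a 5760×1080 resolution implies, the per-frame pixel load works out to exactly three times that of a single 1080p screen:

```python
# Pixel load of a triple-monitor Eyefinity setup vs a single 1080p screen.
single = 1920 * 1080       # one 1080p monitor
eyefinity = 5760 * 1080    # three 1080p monitors side by side

print(single)              # 2073600 pixels per frame
print(eyefinity)           # 6220800 pixels per frame
print(eyefinity / single)  # 3.0x the fill-rate and shading work
```

Roughly tripling the shading and fill-rate work per frame is why these multi-monitor setups are one of the few scenarios where a single-GPU upgrade of this class pays off.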
And nvidia is trying to fool customers with a high transistor count while performing like sh*t.
lol, and the nvidia 500 series was the only proper series in years from that company. And now to correct some fanboyism:
Game optimization has nothing to do with drivers; it has to do with the game itself. That's why so many multiplatform games run like crap on PS3, and then people say "lulz, PS3 sucks" because the PS3 has less processing power. I'm not even bothering to compare the specs; they've been around for years, and it's more than clear from both the FLOPS and CPU specs which one actually does better. Also, if you stop for a minute and think about how dirty nvidia plays, even disabling features such as AA in nvidia-sponsored games like Batman as soon as an ATI card is detected, I think this statement is clearer than ever: Nvidia is a lot of hype and a lot of crap talk. Same drill on Android.
Nvidia does indeed have higher performance in the 580 vs the HD6970; however, nvidia's AA and AF are still some sort of cheap blurring filter compared to ATI's AA and AF.
The only card I actually disliked from ATI was the very first version of the 4870 available on the market. It died on me quickly because the VRM couldn't keep up with the card's power demand, reaching 93°C on the VRM while the GPU was at 57°C; it actually died during a Windows 7 installation.
The use of the old architecture is kind of a bummer.