What's up with those rumors?
-
Where’s my downvote button?
All the best, Uwe
Be sure you have the latest WINE (4.17); some changes:
New version of the Mono engine with upstream fixes.
Support for DXTn compressed textures.
Initial version of the Windows Script runtime library.
Support for XRandR device change notifications.
Support for generating RSA keys.
Stubless proxies support on ARM64.
Various bug fixes.
-
I got it. Come on, just let’s finish it:
BMS 5.0 = Win 10 AND Linux AND Mac, with everything else you like.
Does that work for you? :lol:
With best laughing regards to all.
-
BMS 5.0 = Win10 + DX12, I can feel it… :twisted:
BMS 4.32 > BMS 4.33 > BMS 4.34 > …
BMS5.0 will be Win31 and DX108!
-
Well, I don’t have enough objective knowledge to deny that, but can you explain why so many people (even here, on these fora!) had to revert to a previous release after some Win10 updates?
I have no idea. I have had 0 issues with Windows 10, and I have been using it since late 2015.
I’ll have to link to this post on the day MS announces that Win10 is available through a monthly subscription only. (A few years down the road still, but inevitable IMNSHO.)
Also, please remember that quite a few folks have had their carefully tuned setups blasted to bits by the forced auto-updates. I’d be very miffed if that happened to my deskpit; it must be even worse for the guys and gals running full-blown cockpits.
Linux desktops have improved in leaps and bounds over the last couple of years, and BMS runs nicely on them using WINE (or at least did so until 4.34, which seems to have a few issues). It would be a match made in heaven: a sim fueled by passion running on a free/libre operating system also fueled by passion, with a few big enterprise players (Microsoft?) thrown in, with official support from the devs, and hopefully on a true cross-platform API like Vulkan (one may dream :))
I don’t want to turn this into yet another Windows vs. Linux debate. Technical merits aside, Win10 still has many huge question marks hovering about it w/r/t privacy and telemetry settings, and the question of who actually “owns” the hardware you run it on. It’s been discussed here quite a few times and I was going to sit this one out in this thread, but after your post I thought the other side of the scale could use some additional weight.
All the best,
Uwe
Yes, it’s possible that Windows 10 (or 11) will be available as SaaS one day (I don’t like this idea at all).
Yes, there are things that I don’t like in Windows in general (like the C:\Users\<username> directory that was introduced in MS Windows Vista IIRC; fortunately there is a workaround for this).
I am not going to criticize Linux, because Linux is a kernel to me. It’s good. Even better, it is free. But Linux distros are another story. I am an AIX and Linux sysadmin with several years of experience, I have a few RH certs including RHCE, and I have to say that RHEL/Fedora/CentOS is a trash bin, especially after introducing systemd. Ubuntu is better, but you still have to struggle to accomplish even simple things. Only SLES and openSUSE are somewhat mature operating systems, mostly because they have YaST (but YaST is still worse than the Windows Control Panel and not comparable to AIX SMIT).
Maintenance is a joke: leave a system running under normal user administration and you will end up with a cluster****: /boot 100% full, and often it’s not even possible to remove old kernels with ‘tools’ like yum or apt.
Update/upgrade is a joke: even Windows has some kind of restoration point, like the Windows.old directory (never needed to use it, though). I don’t dream about AIX features like alt_disk or multibos, but I will consider anything better than ‘run yum upgrade and pray’ a plus.
But what do I like about Windows 10? It’s rock solid: literally 0 CTDs since August 2016 (though honestly, I do not see an uptime of 1800+ days as a plus). It is consistent (but could be better; hopefully one day all tools and menus will become ‘modern’) and almost idiot-proof. I have not had any problems with any of the Spring and Fall updates (unfortunately I cannot say the same about updating my home openSUSE or Ubuntu or CentOS). Regarding telemetry, I used this setting:
• Basic diagnostic data is information about your device, its settings and capabilities, and whether it is performing properly. This is the minimum level of diagnostic data needed to help keep your device reliable, secure, and operating normally.
and I couldn’t care less :-)
-
If the next release is only for Win10, I can assure you 50% of BMS users will stick with 4.34, there will be a demand for the source code, and a new fork will continue…
-
Right, because BMS users are too smart to upgrade and be taken in by that Win10 trap!!
-
True. And if BMS does go the route of DX 11 or 12, all of those posters who want better GFX will get it. And then complain that they don’t want to upgrade their OS? Can’t have it both ways here. DX 9.0c can only do so much. Plus, I-Hawk has mentioned that DX 9.0c is a resource pig. So, if you want better GFX, be prepared to upgrade your system to take advantage of what we have been discussing. But again, this is only if BMS decides to go with the DX 11 or 12 APIs.
-
Hmm. The only company that does not declare its products to be in alpha/beta state when it starts selling them.
Win10 seems to have matured enough, and I have seen no problems with it now. I used Win7 as long as I could, but Win10 was the only option for my new comp.
I just don’t understand why MS is trying to fix things that were not broken in the older system with their new “inventions that save the world and cut your work in half”, when the reality is that I downloaded programs to fix the bugs they created.
-
I think you are comparing apples and oranges a bit. Maybe you think Doom and ARMA can be compared, but maybe they can’t, actually. The ARMA scene seems much “richer” to my eyes. The Doom scenes you showed mostly have a dark background, a few effects and some 3D models. I never played ARMA, and the last time I played Doom I was 15 or so, so I’m not really familiar with the overall GFX, but it seems like two very different animals. Also, there is a possibility that the ARMA engine is less efficient, regardless of API.
And… comparing ARMA to BMS is Apples to Potatoes maybe
It’s not really about the API; there could be many reasons why there is a difference between game engines. I know it’s popular today to think that Vulkan is the savior, but look for example at this video; you will see that Vulkan and DX12 are pretty close:
-
But yeah, as I’ve said… with DX11 as worst (besides unoptimized DX9) in AVG frames, and Vulkan as BEST.
So… what’s wrong with Vulkan? Too fast? It’s not about the game (Doom/ARMA/BMS), but about the ability of the gfx engine to ‘do the math’ and do it FAST on the GPU; so, it’s about the speed of the gfx engine.
Same sh** with programming: do the math in VB or C++… which one compiles/runs faster? You get my drift.
It’s not ‘a blind guess’ that drives DCS to the same conclusion… fix the graphics, raise the enjoyable experience, get playability… win-win situation. (Well… and fix bugs, that is.)
So, I’m not comparing games, but the possibility to run max details on different engines… (I’ll see if I can dig up some benchmark software which includes all the tech: DX9/11/12/OGL/Vulkan.)
…
Cheers
-
I don’t know enough about DX11 or DX12 to say which should be used. I mean, one would think DX12 would be the better goal, just by the fact 12 is a higher number than 11. But I leave that to the people who know what they’re doing.
With extended support for Windows 7 ending in 3 months (January 14, 2020), meaning no more updates or security patches, I would have no problem with a move that required Windows 10.
-
Support for W7 ending just means we need some guides on how to run BMS under Ubuntu for total newcomers to Linux.
Applying the logic regarding 12 being a higher number than 11, I guess that means Windows 8 is better than Windows 7.
-
Well… sure, it seems like Vulkan is very good, but what I’m saying is that I don’t really think it has any considerable advantage over DX12 besides the fact that it is multi-platform. Performance-wise, Vulkan and DX12 look the same, and I don’t know Vulkan or OpenGL very well, but I think I read somewhere that DX has more support and that it might be easier to handle coding-wise (but that isn’t confirmed!).
But of course, no doubt, I assume that if we could have gone Vulkan we would have done that instead; the reality is that it won’t happen (at least not in the upgrade coming in the next 3-4 weeks).
Also, one more thing that I already mentioned before: people tend to give too much credit to the API, in my eyes. Ultimately, shaders are pretty simple programs (compared to heavy C++ OOP style, for sure; shaders are mostly procedural-style programming, more like simple C than C++), and if written with the best optimization possible, the GPU will run them at the same/similar speed, regardless of the API.
For example:
Even on DX11, I’ve seen that shader compiling is VERY efficient. A specific example: the compiler will remove any crap that isn’t needed at runtime, even if you are dumb enough to leave it in the code; I’ve seen it happening. It’s quite nice to see that the compiler is smart enough to figure out that if, say, some variable ends up unused in a pixel shader, it will skip all the instructions involved with that variable without even bothering you. Like saying: “You are a fricking idiot, so I’ll just fix it for you.”
I think that if your engine is efficient enough and you keep the GPU busy close to full utilization, then I honestly don’t know how much more boost a heavily multi-threaded API like DX12 or Vulkan will give you over, say, DX11. I hope you see what I mean.
From my (relatively short) experience with CPU/GPU work, I think I can safely state that today’s CPUs are pretty fast, so any rendering-related task is done pretty quickly by the CPU (I mean specifically stuff like updating a dynamic vertex buffer, calculating world matrices or doing other vector/matrix math, and of course updating GPU state and executing commands). So the main source of inefficiency, in my eyes, is engines which aren’t built efficiently from the start --> the BMS engine, a very good example! Rendering the terrain with ~1000 draw calls just because there are many textures: that is insane! And also running a draw call for EACH “part” of a 3D model: that is also insane! That’s where you start to wear down the CPU.
In simple words:
Make sure you don’t let the GPU breathe; keep it busy by executing HUGE geometry with as few draw calls as possible --> your engine will be fast, even with DX11.
-
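To make the draw-call point concrete, here is a toy cost model in Python. It is purely illustrative, not BMS code: the per-call overhead and per-vertex cost numbers are made-up assumptions, chosen only to show how a fixed CPU cost per draw call punishes many small batches versus a few big ones for the exact same geometry.

```python
# Toy cost model (illustrative only): per-draw-call CPU overhead dominates
# when the same geometry is split into many small batches.

DRAW_CALL_OVERHEAD_US = 40.0   # assumed fixed CPU cost per draw call (us)
COST_PER_VERTEX_US = 0.001     # assumed processing cost per vertex (us)

def frame_cpu_cost_us(num_draw_calls: int, total_vertices: int) -> float:
    """Estimated frame cost: fixed overhead per call + work per vertex."""
    return (num_draw_calls * DRAW_CALL_OVERHEAD_US
            + total_vertices * COST_PER_VERTEX_US)

# The same 2M vertices of terrain, batched two different ways:
many_small = frame_cpu_cost_us(1000, 2_000_000)  # ~1000 calls, one per texture
few_big = frame_cpu_cost_us(10, 2_000_000)       # merged into 10 big batches

print(f"1000 draw calls: {many_small:.0f} us per frame")
print(f"  10 draw calls: {few_big:.0f} us per frame")
```

With these assumed numbers, the 1000-call version spends roughly 20x longer per frame than the batched one, even though the GPU-side vertex work is identical; that is the “keep the GPU busy with few big draws” argument in miniature.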
Something like this, I guess
-
I-Hawk, your screenshot looks strange, almost completely black. Is it North Korea at night? ;)
BTW, what is that tool for measuring performance? It looks like *nix nmon.
-
nvidia-smi (or in short, NVSMI)
https://developer.nvidia.com/nvidia-system-management-interface
If you have an Nvidia GPU, you can run it from the command line. By default it should be under your “C:\Program Files\NVIDIA Corporation\NVSMI” path. Just run it with the -l switch to get a continuous readout:
nvidia-smi -l
Windows Task Manager isn’t as accurate (at least in my experience); Process Explorer is much better, but NVSMI is a more direct readout from Nvidia, so I trust it the most.
-
My apologies in advance for asking, I-Hawk, but just out of curiosity: judging from the product-support notes, why does that nVidia tool seem, to you, to fit their professional video cards well but their ‘consumer’ ones only a little?
Thanks a lot, and sorry if I’m bothering you.
With best regards.
-
Ha, not sure what they mean; possibly more advanced support. But anyway, AFAIK it is a reliable tool for querying GPU utilization, memory load, etc. For me it’s good enough.
-
Jackal
It’s just an SMI interface util… nothing special. All overclocking utilities use this kind of approach (even more complicated: direct input to registers) when reading/writing to the gfx card. It’s also useful for monitoring applications, since it’s direct… yes.
’Nuff said. That’s right ‘au contraire’ to what I am seeing now in BMS… GPU usage ‘jumps’ 0-90/100 every 1-3 seconds, while CPUs (threads) are at ~<50%, so… (gfx) she’s not busy at all… “when I’ll have sumting to do… I’ll do it… and IDLE”…
…but the CPUs are at 20-50%, doing… God knows what… but not much… SO THAT IS MY PROBLEM… wtf is going on? (The system is more or less optimized; neither the CPU nor the GPU hogs it.)
When I re-encode video I get a nice 100% on the GPU and/or CPU (or both with OpenCL), but BMS works in “spikes”… I would really like to take a look inside (under the hood) once; maybe I can break something.
Any ideas?
Don’t get me wrong… I would gladly see BMS in DX12 (or even slower DX11) rather than DX9, sheesh, we all would… (but if I could choose, I would rather try the Vulkan approach… I’m not pushing it, just saying)
Cheers
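The burst-and-idle pattern described above can be made measurable with a few lines of Python. This is a toy sketch over made-up utilization samples (e.g. as you might collect with `nvidia-smi -l 1`), not a real profiler:

```python
# Toy metric for "spiky" GPU utilization: the mean absolute change between
# consecutive samples. A busy GPU hovers near 100% with small jumps; a
# burst-and-idle workload swings wildly. All sample numbers are made up.

def spikiness(samples: list) -> float:
    """Mean absolute change between consecutive utilization samples (%)."""
    deltas = [abs(b - a) for a, b in zip(samples, samples[1:])]
    return sum(deltas) / len(deltas)

steady = [95, 97, 96, 94, 96, 95]   # GPU kept busy, e.g. video re-encode
spiky = [0, 90, 5, 100, 0, 85]      # burst-and-idle pattern, as described

print(f"steady workload: {spikiness(steady):.1f}% average jump")
print(f"spiky workload:  {spikiness(spiky):.1f}% average jump")
```

A large average jump (here ~91% vs ~1.6%) is the numeric signature of a renderer that feeds the GPU in bursts and then lets it idle, which matches the observation above.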
-
BMS engine currently is… as I explained here:
https://www.benchmarksims.org/forum/showthread.php?31216-AMD-RyZen-Build&p=513824&viewfull=1#post513824
-
We are dealing with a 20+ year old game engine. The first priority is to clean up that engine and prepare for whatever current (and possible future) technology will run BMS. I believe the devs are already hard at work on this. That in itself will take some time. Optimize as much as can be optimized. Utilize multi-core technology in this optimization; in other words, make sure the GFX work (APIs, etc.) runs through a dedicated core. I foresee the need for multi-core CPUs on the horizon for BMS. I think most (if not all) VPs are using multi-core CPUs anyway. I-Hawk is 100% correct in saying that BMS needs to shunt the GFX information through to the GFX card to keep the render queue “full”, so to speak. That will allow for much greater overhead room.
DX 11 versus DX 12: the conclusion is that DX 12 has more tools to utilize than DX 11. DX 11.2/11.3+ has “Tiled Resources” (useful for terrain textures, using less memory to render), “Conservative Rasterization” (collision detection), “Default Texture Mapping” (reduces copying images from CPU to GPU), and yet more tools from there…
https://docs.microsoft.com/en-us/windows/win32/direct3d11/direct3d-11-3-features
Those tools are not available in plain DX 11, only DX 11.3+. So if you’re going to want those tools, you are looking at Win 8 or 10.
DX 12 adds more to the list above. Also, there appear to be tools for Win7 and DX 12 from Microsucks. So if more rendering tools will make a difference in the models and terrain, then DX 12 would be the way to go…
https://docs.microsoft.com/en-us/windows/win32/direct3d12/new-releases
Either way, the more tools that are available to the devs, the better IMO.