Binoculars, telescopes and scoped rifles + Vision changes

First of all, my biggest and only real problem with Cataclysm is that I seem to only be able to see about 50 yards in any direction. While I understand that it's simulating the fact that past about 50 yards I wouldn't be able to tell exactly what an object on the ground is, I can assume the character can see much farther, judging by the way the map view works, and I think limiting the character to a tiny ring of vision is a bit ridiculous. I think I've come up with a sort of middle ground between the tiny vision cone we have now and real-life vision.

Do away with the black area around the player. Let vision extend in every direction as far as the vision cone in the map view goes, but have any item outside the current vision cone give the message “There is an item here” rather than its specific name. This way I'd be able to tell if there's something off in that field over there, even if I don't know what it is. When shift+xing to look at a creature, it could say something like “There's a humanoid there” for zombies and survivors, “There's a humanoid there, but it's not human” for any bipedal monsters, “There's a four-legged creature there” for moose, deer, boar, etc., and “There's a small four-legged creature there” for dogs, cougars, foxes and cats. But things near a lot of cover or trees, as well as in buildings or bushes, would be pretty much invisible.

If this were implemented, we could use binoculars to offset our view and see distant things in detail (with a very small amount of time passing for each tile you move your vision, to simulate taking time to scan around with binoculars).

With something like this, proper rifle usage could become a reality too. All pistols could have their range extended by about 20%, making pistols shoot as far as rifles do now, and rifles could fire several dozen meters farther (assuming your marksmanship skill was high enough). Sniper rifles could let you fire at things a couple hundred meters away, using marksmanship and your rifle skill to home in on a target. You could also take time to line up shots, making a shot you took time on much more accurate than a pot shot. Plus NPCs could use it eventually.

Imagine walking down a road when an NPC sniper starts taking shots at you from a treeline. He's so far away that you can't see him, so you run behind a car and hunker down (which would also be a cool mechanic: being able to sacrifice movement and visibility for extra cover when behind an object), then peek out between shots to scan with your binoculars for the shooter.

I'd imagine that'd be really difficult to implement, unfortunately.

That's not quite doubling the reality bubble radius; you're asking for a radical increase in processing there, and it would likely require us to rebalance the game from the ground up.

Might well be better off building from an initial commit, even. :-/ Sorry.

I figured it'd be impossible to make, but still, that's what my ideal change to the game would be. Who knows, maybe someone will come along and figure it out.

I figured even if it's impossible, it's still worth suggesting, if only for the fact that it might spark other ideas for people.

The reality bubble isn't simulating anything, it's a performance restriction. Every extra square of vision greatly increases how much work the game has to do, for not that much benefit (on a per-square basis).
Doubling it would literally increase the processing requirements of the game tens to hundreds of times.
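
Some back-of-envelope arithmetic on that, sketched in Python with made-up radii (the real bubble size may differ): the bubble is a square of side (2r + 1) tiles, so the tile count alone grows quadratically with the radius, and any work that pairs tiles or entities against each other grows faster still.

```python
def tiles(radius):
    """Number of tiles in a square 'bubble' of the given radius."""
    return (2 * radius + 1) ** 2

# Doubling a hypothetical radius of 30 roughly quadruples the tile count,
# and quadratic (tile-vs-tile) work grows by the square of that again.
area_growth = tiles(60) / tiles(30)   # a bit under 4x more tiles
pairwise_growth = area_growth ** 2    # around 15x for pairwise work
```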

Today I learned something.

I suppose it makes sense. Thanks for the explanation.

Doesn't mean this is a bad idea, however. I really wouldn't mind the game running slowly, but when I had a really poopy laptop I was glad that the game wasn't so… hardware intensive.

There’s the problem, Ninja. One reason folks like myself get into roguelikes is that they generally are NOT system-intensive. No fancy graphics means no need for an expensive graphics processor. :wink:

Why does the reality bubble's size impact performance so much?
Are there operations performed on empty tiles or is it just a matter of including more active objects (mons, items, fields etc.) and tracing more rays?

Could the bubble's size (hypothetically) be changed from a compile-time constant to a configurable variable?

On a totally unrelated note: could ray tracing feasibly be made parallel (if it isn’t already)?

[quote=“Coolthulhu, post:8, topic:7813”]Why does the reality bubble's size impact performance so much?
Are there operations performed on empty tiles or is it just a matter of including more active objects (mons, items, fields etc.) and tracing more rays?

Could the bubble's size (hypothetically) be changed from a compile-time constant to a configurable variable?

On a totally unrelated note: could ray tracing feasibly be made parallel (if it isn’t already)?[/quote]

Because everything that happens in the game happens in that reality bubble.

Let’s break down the big ones real quick.

Scent diffusion: This is a multidirectional iteration over the area, since the value of each square is determined by the previous state of each surrounding square and of the square itself. This one is really gnarly since you have to build in both backtracking and readahead, which does bad things to caching. We want to add multiple scent maps, but that's not happening unless we can get an algorithm many times faster than the one we have now, and enlarging the active area would just make that worse.
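
To illustrate why that iteration scales with the active area, here is a toy diffusion step in Python, using a simple neighbour-averaging rule (the game's actual rule differs): every square must read all of its neighbours' previous values every turn, so the cost is proportional to the number of tiles.

```python
def diffuse(grid):
    """One diffusion step: reads the old grid, writes a fresh one."""
    h, w = len(grid), len(grid[0])
    new = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # each square's new value depends on its whole neighbourhood
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += grid[ny][nx]
                        count += 1
            new[y][x] = total / count
    return new

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 9.0          # a scent source in the middle
grid = diffuse(grid)      # the 9.0 spreads evenly into the 3x3 around it
```

Double-buffering (reading `grid`, writing `new`) sidesteps the backtracking/readahead problem at the cost of a second array.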

FOV: This has a “better” main algorithm now, which means we “only” iterate over the entire map 2-3 times: for transparency, field-of-view calculation, and lighting. The real killer here is that to derive each of these we have to check many different pieces of data (off the top of my head: terrain, furniture, vehicles, and fields), keeping in mind that vehicles and fields can have multiple components per square.
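
A sketch of that per-tile layering, with invented names rather than the game's actual API: transparency is only known after consulting every layer on the square, and the vehicle and field layers are lists that must each be scanned.

```python
# Hypothetical opacity data; the real game drives this from JSON definitions.
TERRAIN_OPAQUE = {"t_wall"}
FURN_OPAQUE = {"f_bookcase"}

def tile_transparent(terrain, furniture, vehicle_parts, fields):
    """True if light passes through the square; several lookups per tile."""
    if terrain in TERRAIN_OPAQUE:
        return False
    if furniture in FURN_OPAQUE:
        return False
    # vehicles and fields can stack multiple components on one square,
    # so each is a list that has to be walked in full
    if any(part.get("opaque") for part in vehicle_parts):
        return False
    if any(f.get("blocks_vision") for f in fields):
        return False
    return True
```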

Tile rendering: This also scales with the number of squares displayed, though at least it’s only with the number displayed instead of the total. A lot of the features we’ve built into the tiling engine have a real cost, like season-specific tiles, fallback tiles, rotations, and multitile support. Each of these features can trigger multiple lookups for the right tile to display. This is mitigated somewhat by a tile cache that was added a month or so ago, but it still slows the game to a crawl on my system if I configure it to have all the tiles visible at once.

Monster AI: Our AI might generally be dumb, but it still has to check its surroundings to figure out what to do. I was doing performance testing earlier tonight with 10,000 zombies on screen and was able to ditch some major bottlenecks, mitigating the effects of so many zombies interacting, but it's still a hell of a lot of calculations happening. Increasing the screen size means increasing the number of monsters that will be onscreen.
The monsters get to use the scent map for “free” because it’s calculated for them, but they have to do their own LOS, so the cost to compute whether they can see something scales with the distance to the target.
Likewise, NPCs have to do LOS the hard way: they have to just draw lines and see if they intersect with anything.
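
The "draw lines" approach amounts to a Bresenham-style walk from viewer to target; this sketch (not the game's actual code) shows why the cost is proportional to the distance, since every intermediate square must be tested for opacity.

```python
def line_of_sight(x0, y0, x1, y1, blocked):
    """Walk a Bresenham line; blocked is a set of (x, y) opaque squares."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        # every square between viewer and target gets an opacity check
        if (x, y) != (x0, y0) and (x, y) in blocked:
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return True
```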

In the first one I’m debug-invisible, so only the ones that can hear me punching their friends are coming after me.

Made myself visible, and now most of them are trying to pile on.

Fields get updated every turn, and more squares means, on average, more fields. Fire is especially bad, since it rummages around in the items under and around it.

Active items also scale with active map size, and they're what dominates processing time in any kind of accelerated mode right now (waiting, sleeping, reading, crafting). There are some serious mitigations we can do for active items, but tracking when all the food rots, when zombies get back up, and when dynamite blows up isn't free.

Does the food-rot check need to be done every x turns? Could it be done retroactively, as with food outside the bubble?
Instead of updating food whenever it's around, update it just before it is used in a calculation (the player looking at it, fire consuming it, etc.). Otherwise each item would just carry an “update on this turn” value. Active items could be sorted by their update turn and the list processed only partially, breaking out at the first item with update_turn > gametime.
It could also work for corpses.
Anything wrong/unimplementable with my idea?
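
The sorted-by-update-turn idea above could be sketched as a min-heap of (turn, item) pairs, so each turn pops only the items that are actually due. This is a hypothetical illustration of the suggestion, ignoring the environment-dependent rot rate that makes the real problem harder.

```python
import heapq

class Scheduler:
    """Active items keyed by the turn they next need processing."""

    def __init__(self):
        self.heap = []  # (update_turn, item) pairs

    def schedule(self, turn, item):
        heapq.heappush(self.heap, (turn, item))

    def due(self, now):
        """Pop and return every item whose update turn has arrived."""
        out = []
        while self.heap and self.heap[0][0] <= now:
            out.append(heapq.heappop(self.heap)[1])
        return out

queue = Scheduler()
queue.schedule(10, "meat")
queue.schedule(5, "milk")
queue.schedule(30, "corpse")
```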

Why do drugs/permafood have rot timer updates (at least DEBUG mode says they do)? Do they just not contribute enough to the slowdown to matter?

Once again with the parallelism: from what I've heard, calculations like this (spreading fluids) parallelize really well. If it's a simple cellular automaton, then there are no race conditions, and it could be really easy to implement and very good for performance on modern CPUs. I've also read that C++11 supports parallelism as a core feature.
Are there any objections to parallelism or is it just a result of people being occupied with more important things?
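
The no-race-conditions claim can be illustrated with a double-buffered automaton where each worker reads only the old grid and writes disjoint rows of the new one. This is a structural sketch in Python (where the GIL prevents a real speedup with threads), not a claim about what the game does.

```python
from concurrent.futures import ThreadPoolExecutor

def step_rows(old, new, y0, y1):
    """Update rows [y0, y1) of new from old; workers never overlap rows."""
    h, w = len(old), len(old[0])
    for y in range(y0, y1):
        for x in range(w):
            total, count = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += old[ny][nx]  # reads touch only the old grid
                        count += 1
            new[y][x] = total / count

def parallel_step(old, workers=2):
    """One automaton step with the rows split across worker threads."""
    h, w = len(old), len(old[0])
    new = [[0.0] * w for _ in range(h)]
    chunk = (h + workers - 1) // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(step_rows, old, new, y, min(y + chunk, h))
                   for y in range(0, h, chunk)]
        for f in futures:
            f.result()  # propagate any worker exceptions
    return new

g = [[0.0] * 5 for _ in range(5)]
g[2][2] = 9.0
result = parallel_step(g)
```

Because writes land in disjoint row ranges of `new`, the parallel result is identical to a single-threaded pass.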

As for LOS calculations and many monsters: could it be sped up by informing the monster that it is visible to the player? If it is, it could then only check the distance and whether the player's tile is lit, making big visible hordes less CPU-intensive.

PS. I may have discovered a tiny bug: process_corpse either revives the corpse or makes it inactive, yet revive_corpse can fail if there are creatures on the tile, and it explicitly states “try again later”.

I said,

there are some serious mitigations we can do for active items,

In more detail, these two are the immediate mitigations for the active item updating.

Updating food when it's looked at is extremely error-prone and scatters calls to handle food rot all over the code where they don't belong. It's not a tenable situation to maintain. The other wrinkle is that rot in particular has a varying rate based on the item's environment, so you can't just calculate in advance when it needs to happen.

Permafood does not get processed, that’s an artifact of the debug code.

Parallelism doesn't help a bit if the only systems where performance is an issue are legacy systems with one core. If you have a quad-core i7 running at 3GHz, you aren't encountering any problems anyway; it's the single-core P4 systems I'm more concerned about, or ARM, or an EEE machine (which I have, and want to continue to play DDA on). The really bad thing is that parallelism makes the program slower on single-core processors, so there it's a total loss.

C++11 has some parallelism features, but I certainly wouldn't call it a core feature.

Informing the monster that it's visible to the player is unhelpful because they have different vision modes. The player might be peeking through something, or hiding, or in any number of other situations where vision isn't symmetric.

OFT: I've always admired those skilled OGL programmers, fully aware of what optical aids do (regarding vision) and of how to process game data that isn't all that visual. To put it another way, it would be a real bore to find out that the items/foes you've scouted aren't really there when you arrive on the spot.

Thanks so much for the explanations. It's really interesting to see how vision works as a mechanic.