What's up this week? 11/1/12

Discussion in 'Planetary Annihilation General Discussion' started by neutrino, November 2, 2012.

  1. doompants

    doompants New Member

    Messages:
    17
    Likes Received:
    1
    Woot! Welcome to the team Sorian! And Elijah.. gotta love seeing such established names joining the roster.
  2. doctorzuber

    doctorzuber New Member

    Messages:
    252
    Likes Received:
    0
    I just want to say real quick that I respect and admire you guys for how willing you are to be this transparent about your development process as you work. This level of feedback to the community is certainly rare.

    I think that, combined with the concept of crowdsourcing and the rising popularity of indie development in general, you could be setting some very interesting precedents here in how you are doing things. In other words . . .

    You're doing it RIGHT!
  3. RaTcHeT302

    RaTcHeT302 Guest

    Can't wait for more news.
  4. neutrino

    neutrino low mass particle Uber Employee

    Messages:
    3,123
    Likes Received:
    2,687
    The answer to this question could be a blog post. I'll try and take a quick stab here.

    Short answer is yes. You do pay attention to optimization from the beginning. There are multiple levels and types of optimization though.

    The first type of optimization is figuring out what problem you are actually trying to solve. I suppose this is a meta-optimization. What's the granularity of your simulation? What are your overall design parameters? How big/how much and how do the pieces fit together? In our case most of these questions are fairly easy but I can tell you that William and I went back and forth on requirements quite a bit during the initial architecture work. For instance I pushed really hard on the replay system and game scalability in ways that radically changed our thinking. All of this top level design is the form of optimization I'm talking about. We make natural assumptions and tradeoffs during this phase that greatly affect the problem space we end up working within for other optimizations.

    Once you've got a basic engine architecture the next phase of optimization is picking smart algorithms. This is where all of the big-O notation stuff comes out. We've been looking at a data structure called a skip list, for example, to store the curve data. In this day and age you also want to pick algorithms that can more naturally be exploited for concurrency.
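    A skip list keeps sorted data searchable in O(log n) expected time by layering probabilistic "express lanes" over a linked list. Below is a minimal sketch of one keyed by sample time, the kind of structure that could hold per-entity curve data as described above (this is a hypothetical illustration in Python, not Uber's actual code):

    ```python
    import random

    MAX_LEVEL = 8

    class _Node:
        def __init__(self, time, value, level):
            self.time = time
            self.value = value
            self.next = [None] * level  # next[i]: successor at level i

    class CurveSkipList:
        """Skip list keyed by sample time: O(log n) expected insert/lookup."""
        def __init__(self):
            self.head = _Node(float("-inf"), None, MAX_LEVEL)  # sentinel

        def _random_level(self):
            # Each extra level appears with probability 1/2.
            lvl = 1
            while lvl < MAX_LEVEL and random.random() < 0.5:
                lvl += 1
            return lvl

        def insert(self, time, value):
            update = [self.head] * MAX_LEVEL
            x = self.head
            # Descend from the top level, recording the rightmost node
            # before the insertion point at each level.
            for i in range(MAX_LEVEL - 1, -1, -1):
                while x.next[i] is not None and x.next[i].time < time:
                    x = x.next[i]
                update[i] = x
            node = _Node(time, value, self._random_level())
            for i in range(len(node.next)):
                node.next[i] = update[i].next[i]
                update[i].next[i] = node

        def sample_at(self, time):
            """Latest sample at or before `time`, or None."""
            x = self.head
            for i in range(MAX_LEVEL - 1, -1, -1):
                while x.next[i] is not None and x.next[i].time <= time:
                    x = x.next[i]
            return None if x is self.head else (x.time, x.value)
    ```

    The "find the latest sample at or before time t" query is exactly what a curve-based replay or interpolation system needs, and the express lanes also make the structure friendlier to concurrent lock-free variants than a balanced tree.
    
    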

    Then once you've picked a decent algorithm you want the implementation to be fast and bug free. Premature optimization at this phase can make code hard to read and waste time before you even know how the code fits into the whole. I try to write first pass code that's "plausibly shippable" which is my bar for something I don't need to touch again before shipping. This means it's "done". However, in many cases I do write simple versions of stuff quickly that do have bad scaling characteristics. This usually goes away quickly.

    There are a million ways to optimize a particular piece of code. Data design with cache access in mind is quite important. Rarely do we do something like write asm code though. Things like cache pre-fetching I definitely do in some cases (although it makes less and less of a difference on new processors). I also pay attention to branch misprediction type issues in places where it might matter like inner loops.
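    The "data design with cache access in mind" point is often illustrated by struct-of-arrays versus array-of-structs layout: a hot loop that only touches positions should walk memory linearly rather than hopping between fat objects. A sketch of the idea (hypothetical unit fields, shown in Python for brevity; in C++ the win comes from contiguous arrays staying in cache):

    ```python
    # Array-of-structs: all fields for one unit live together, so a pass
    # that only reads positions still drags health/target data through cache.
    class UnitAoS:
        def __init__(self, x, y, hp, target):
            self.x, self.y, self.hp, self.target = x, y, hp, target

    # Struct-of-arrays: one array per field. A movement pass iterates just
    # the two position arrays, which map to contiguous, prefetch-friendly
    # memory in a native implementation.
    class UnitsSoA:
        def __init__(self):
            self.x, self.y, self.hp, self.target = [], [], [], []

        def add(self, x, y, hp, target):
            self.x.append(x); self.y.append(y)
            self.hp.append(hp); self.target.append(target)

        def advance(self, dt, vx, vy):
            # Tight loop over only the data this pass needs.
            for i in range(len(self.x)):
                self.x[i] += vx * dt
                self.y[i] += vy * dt
    ```

    The same layout also makes the branch behavior of inner loops more uniform, which helps with the misprediction issues mentioned above.
    
    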

    We have our own built in profiler and event catcher that we can use to get some quick timings during development.

    As much as possible yeah. I've already made some pretty big planets for example and I typically run scenarios with different sized planets to see how performance is scaling (I want memory and perf to scale linearly or better with surface area and unit count).

    The rule of thumb on scalability is that every time you scale by 10x you hit another place where you need to fix something. My current gut feeling is that server perf is going to be limited by available bandwidth more than CPU.

    I can't remember exactly what we fixed in that patch. Possibly we replaced the memory allocator as we had a ton of small block allocations going on. Or it could have been some graphics lod stuff or something, not sure.

    This is always a danger. It's one of the main reasons I try to just write the code to a shippable level of quality as I go. Pick the right algorithm and make a passable simple implementation and you are golden most of the time.

    Generally speaking you can optimize one part without causing any issues. Although in a lot of cases more global optimization can get you more perf. So in this case you might start breaking down walls between units to get the extra perf. A lot of this comes down to good upfront data/algorithm design though.
  5. grimmstaman

    grimmstaman New Member

    Messages:
    7
    Likes Received:
    0
    Good god that was a long post, Neutrino! But it's super fantastic that you took the time to make it. It says a lot about you and Uber.

    On a separate note, I'm super psyched about PA and all the updates between now and launch! :D
  6. doud

    doud Well-Known Member

    Messages:
    922
    Likes Received:
    568
    Thanks very much Neutrino for this very detailed and comprehensive answer.
    Not only do I enjoy playing RTS, but being fascinated by how all this comes together, I really appreciate that you take the time to share the background with us. I can still remember the very first time I put my hands on SupCom and started to zoom out/in. I could not believe it, so many units moving and firing at the same time and this amazing ability to smoothly zoom in/out. Damned, this was pure magic. And even more than playing the game, I was trying to understand how this could work. You can't imagine how many games I have lost just because I was so fascinated by the zoom functionality that I spent my time moving the camera, zooming in/out, trying to make it hang. No way :lol:

    Thanks again, I guess I will have other questions sooner or later :)
  7. terrormortis

    terrormortis Member

    Messages:
    68
    Likes Received:
    1
    Neutrino,
    I'm a mechatronics student and I'm wondering how you represent unit/building/projectile position in your engine.
    I assume each object has a position, orientation and velocity vector?
    Do you plan to use separate coordinate systems for each celestial body?
    Just wondering since I'm working a lot with mobile robots...
  8. Gabberkooij

    Gabberkooij New Member

    Messages:
    20
    Likes Received:
    0
    When developing software you can divide the code into pieces and provide them with some input. After changing the code, the result of those code pieces should be the same. This is called unit testing, and it is a good way to make sure that changes do not break your software.

    Depending on the piece of code, it is difficult or easy to create those unit tests.

    I do not know if the Uber team uses this technique. But there are good tools that help automate the process, so you can run those checks after compilation or on a daily basis.
  9. neutrino

    neutrino low mass particle Uber Employee

    Messages:
    3,123
    Likes Received:
    2,687
    Position, orientation and velocity are used in some places. We use quaternions to represent rotation.

    We call the coordinate system "celestial coordinates" for the planetary systems. Then each planet has its own planetary coordinate system centered on the center of the sphere the planet is made from. So 0,0,0 is the center of the planet. Z is "up" towards the poles.
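    With that setup, mapping a point from planetary coordinates into celestial coordinates is just a quaternion rotation (for the planet's orientation) plus a translation (for its position in the system). A sketch of the math, with hypothetical function names, not Uber's API:

    ```python
    import math

    def quat_mul(a, b):
        """Hamilton product of quaternions (w, x, y, z)."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def rotate(q, v):
        """Rotate vector v by unit quaternion q: v' = q * v * q^-1."""
        w, x, y, z = q
        p = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))
        return p[1:]

    def planetary_to_celestial(planet_center, planet_orientation, local):
        """local: planetary coords (origin = planet center, z toward the pole)."""
        rx, ry, rz = rotate(planet_orientation, local)
        cx, cy, cz = planet_center
        return (cx + rx, cy + ry, cz + rz)
    ```

    Quaternions avoid the gimbal lock of Euler angles and interpolate smoothly, which matters when orientation itself is stored as curve data over time.
    
    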
  10. neutrino

    neutrino low mass particle Uber Employee

    Messages:
    3,123
    Likes Received:
    2,687
    We use the Google Test framework for unit tests. I also write a lot of code to test systems as we are developing them. Unfortunately the more high level the system the harder it becomes. It's also challenging to write unit tests for graphics stuff. Using the AI to drive the systems can be helpful as we can run through games more quickly than humans.
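    The pattern such frameworks give you, a small pure function exercised against known inputs and edge cases, looks like this (sketched in plain Python rather than Google Test's C++ macros; the curve function under test is hypothetical):

    ```python
    def lerp_curve(samples, t):
        """Linearly interpolate a sorted list of (time, value) samples,
        clamping outside the sampled range."""
        if t <= samples[0][0]:
            return samples[0][1]
        if t >= samples[-1][0]:
            return samples[-1][1]
        for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    def test_lerp_curve():
        samples = [(0.0, 0.0), (10.0, 100.0)]
        assert lerp_curve(samples, -1.0) == 0.0    # clamps before first sample
        assert lerp_curve(samples, 5.0) == 50.0    # midpoint
        assert lerp_curve(samples, 20.0) == 100.0  # clamps after last sample
    ```

    Deterministic leaf functions like this are the easy case; as the post notes, the higher-level (and graphics-facing) a system is, the harder it gets to pin down a single expected output.
    
    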
  11. supremevoid

    supremevoid Member

    Messages:
    340
    Likes Received:
    0
    Hey Neutrino

    Do you make a "What's up this week?" topic from scratch every week?
    I could love you for your activity on the forum. I never saw any other developer who did this. THX
  12. terrormortis

    terrormortis Member

    Messages:
    68
    Likes Received:
    1
    thanks, it's really awesome to get questions answered by devs ;)
  13. MasterKane

    MasterKane Member

    Messages:
    81
    Likes Received:
    7
    I have a question about computing resource management. If I understood properly, PA uses a client-server model. Regardless of implementation, it has two major issues:
    • The mentioned state transfer problem, where even 20 Mbit/s may not be enough, and a unit response time of input lag + ping * 2 + frame simulation time.
    • Unbalanced computing resource management, with the server bearing an insane load and clients receiving state, rendering graphics, and sending input, which is a low load for modern CPUs. And considering the simplified graphics, any top-level GPU or multi-GPU system amongst clients will not be under heavy load.
    So the question is, can PA use an advanced distributed computation technique like real-time grid computing, thus utilizing spare client CPU/GPU time to aid the server? Some part of it will be more or less present if the simulation is going to be multi-core, and its possibility is a matter of selecting distributed processing over the widely-used shared-memory multiprocessing model.
    Pros:
    • The simulation can use the combined resources of all CPUs and GPUs (if GPU acceleration of the simulation through OpenCL or similar tech is present), which is a lot more than the server alone, even with overhead.
    • If a client helps the server process the simulation of the areas currently visible to the corresponding user, an optimisation of transfer is possible: after receiving simulated area data, the server is not required to propagate it back to the same client. Since network channels tend to have the same upload/download rate, this lowers the incoming bandwidth requirement.
    Cons:
    • Somewhat difficult to implement responsive load balancing.
    • Without any spare client resources to aid the server, overall performance will be slower due to the overhead of data transfer between simulation processes. This should be a rare case, though.
  14. eface60

    eface60 New Member

    Messages:
    4
    Likes Received:
    0
    Did he just say they've already got a workable AI?
  15. Causeless

    Causeless Member

    Messages:
    241
    Likes Received:
    1
    I doubt it. He's probably referring to the future.
  16. neutrino

    neutrino low mass particle Uber Employee

    Messages:
    3,123
    Likes Received:
    2,687
    You are sort of describing how TA worked. It used a pure async model with no server. Every machine calculated waypoints for its own units and sent them to the other players. So you did get a distributed load.

    There are a number of issues with moving work from the server to the client. For example if we are bandwidth limited which seems plausible this would simply make the situation worse. It could also only be used for calculations with a very long latency time which is actually fairly unusual. Grid computing is typically used in cases where latency doesn't matter.

    There is also a very specific goal here to decouple simulation performance from the clients as completely as possible so that we can scale up the server boxes for huge games.

    As you pointed out this frees up framerate for the client as well. This is a good thing IMHO.

    There is also the issue of cheating which is going to be much easier to prevent in this setup...
  17. nickgoodenough

    nickgoodenough Member

    Messages:
    52
    Likes Received:
    0
    Will ÜBER be hosting these powerful servers? And will we need to pay to play on them? And what plans are there for a matchmaking engine?
  18. neutrino

    neutrino low mass particle Uber Employee

    Messages:
    3,123
    Likes Received:
    2,687
    It's too early to really answer these questions. We don't even know if huge games are going to be fun or not at this point. We'll be running some servers but I expect the community to run the vast majority of them.
  19. MasterKane

    MasterKane Member

    Messages:
    81
    Likes Received:
    7
    If so, will a top PC be able to run a 40-player game server, with most of the players being AIs, and a client at the same time? Dedicated server hardware is not generally affordable for the community, especially enterprise-grade hardware (like a behemoth with 4 SBEP/IBEP Xeons and 256 GB of RAM), so servers will most probably be deployed on home PCs.
  20. thapear

    thapear Member

    Messages:
    446
    Likes Received:
    1
    Why do you ask this question? They don't even have a working game yet... Hell, they're still in the concept stage for parts of the engine. How on earth would they know this now?