Unity general questions

Comments

  • The two models are not mutually exclusive, and neither is perfect. Neither gives performance gains over the other - it's about the softer benefits. Composition provides additional benefits of extensibility, and some agility if you need to change your code base, while inheritance poses some restrictions once your code reaches a certain level of complexity. I normally restrict my use of inheritance purely to reusable base objects nowadays (for example, overriding gameobject to substitute my own code on top of it), as the cost of additional complexity and time spent refactoring gets quite expensive in the long run.

    But, like I said, there's no right or wrong. Feel free to google a little bit - you'll get much better detail on the subject than I could give you atm (on my phone - not the best productivity machine).

    Just btw, regarding my overriding gameobject example - C# also allows extending classes laterally, via extension methods - have a look into that, as it is also extremely useful. Basically, you can "inject" your own functionality into any existing class...
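    For anyone curious, a minimal sketch of what that looks like - an extension method bolts a new method onto an existing class without subclassing it. (GetOrAddComponent is a made-up helper name here, not a built-in Unity API.)

```csharp
using UnityEngine;

// Extension methods live in a static class; the "this" modifier on the
// first parameter makes the method callable as if it belonged to GameObject.
public static class GameObjectExtensions
{
    // Hypothetical helper: fetch a component, adding it first if missing.
    public static T GetOrAddComponent<T>(this GameObject go) where T : Component
    {
        T component = go.GetComponent<T>();
        return component != null ? component : go.AddComponent<T>();
    }
}
```

    Any code that can see the static class can then call someGameObject.GetOrAddComponent&lt;Rigidbody&gt;() as though it were declared on GameObject itself.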
  • tbulford said:
    If you separated, would you be considering forgoing MonoBehaviour altogether? I have seen many recent instances where the MonoBehaviour model consumed far too many resources for what I was trying to do in it. I am not sure if that's just the nature of the platforms I am looking at or if it's something to do with how it called the methods themselves.
    The big value of MonoBehaviours is the built-in editor support in Unity; they're ideal for allowing artists/designers to compose objects and set default values. It's possible to keep that though (I did a small test last week), while separating out game logic into Unity-independent objects. One of the big benefits here (which I sorely miss in Unity) is better control over execution order. So, for example, something like the following would still allow your objects to be designed in Unity, but passed into your engine that could just run on a persistent object in Unity:
    [Serializable]
    public class NonUnityObject : GameEntity
    {
    	[SerializeField]
    	private int someField;
    }
    
    // This wrapper needs to be a MonoBehaviour for Start() to be called
    // and for the editor support mentioned above.
    public class NonUnityObjectComponent : MonoBehaviour
    {
    	public NonUnityObject nonUnityObject;
    
    	void Start()
    	{
    		GameEngine.AddObject(nonUnityObject);
    	}
    }


    You could also just use this pattern for setup objects, and create the actual objects yourself, which would help deal with dependence on core Unity objects with some kind of wrapper layer.
  • Did anyone post this link here already? Just in case:

    http://devmag.org.za/2012/07/12/50-tips-for-working-with-unity-best-practices/

    This is a great list, and I see some redundancy between answers here and this list. Even cooler, it's from our own @hermantulleken on the forums.
    Thanked by 1mattbenic
  • Hehe, yeah, it's been posted here. And if I'm not mistaken @hermantulleken owes it an update (by his own admission)
  • @mattbenic yeah that's more or less what I was considering. I am also evaluating a fully auto-coded state management solution for the AI I am building, since it's easier to simply output code for the states than to build composites that read data in. Not sure where that will end. The execution order is a real pain, although mostly for start-up rather than for general execution in our case. It would be great to swap notes to better see where execution order is causing your issues. Perhaps there is a fundamental difference in approach we could both learn from.

    In our case we have subclassed MonoBehaviour and used FixedUpdate, mostly due to legacy with Toxic Bunny. New games only use Update. The OO tree in Toxic Bunny is not too deep (I was 19 when I started writing it originally), so it's MonoBehaviour->Sprite->Monster->WalkingMonster->Rat, for example.

    Then there is a RatInit that takes all the initialization parameters - this was more a legacy choice than a preference for now. But it does a full scrub so we can easily reuse objects. All in all it's worked out OK. For the newer games we have followed a similar model, although with less polymorphism and more interacting scripts that cross-talk to achieve a similar goal.

    The thing that bugs me is the dependency on Unity. So little code could be removed and used elsewhere. Before now I have always managed to remain fairly neutral, but the sheer volume of interfaces and parts I would have to build to facilitate that just doesn't seem worth it. We're currently looking at breaking the dependence at a higher level: so "walk here, aim there, execute the following animation", as opposed to "put it here, facing there". It's a long exercise.

    Apologies if that was all a bit long-winded and boring; it's probably more fun to talk about than read about.
  • Sooo I'm trying to do the GetComponent call once only and it's not working. Can someone point out the error in my ways... Or is it actually not possible (because I'm instantiating at run time, I don't know how you'd lock down a reference at startup if the reference is a new one for every instance?)

    Variable declarations
    
    	public Sprite startingSprite;
    	private GameObject newBlock;
    	private SpriteRenderer newBlockSpriteRenderer;
    
    	void Start () {
    		newBlockSpriteRenderer = newBlock.GetComponent<SpriteRenderer>();
    	}
    
    	void Update () {
    			//skipped conditions for instantiation for brevity
    			//it doesn't really instantiate every step.
    			GameObject newBlock = Instantiate(originalBlock,
    			                                  new Vector2 (-8 + blockColToMake, 11),
    			                                  transform.rotation)
    							as GameObject;
    			newBlockSpriteRenderer.sprite = startingSprite;
    	}


    The resulting error at runtime (it compiles and runs)

    NullReferenceException
    UnityEngine.GameObject.GetComponent[SpriteRenderer] () (at C:/BuildAgent/work/d3d49558e4d408f4/artifacts/EditorGenerated/UnityEngineGameObject.cs:28)
    GameController.Start () (at Assets/GameController.cs:24)
  • I've gone through the 50 things in Unity thing once and frankly I didn't understand half of it. I'll give it a shot again, just to force myself to level up... :)
  • @Tuism

    Try this rather

    	public Sprite startingSprite;
    	private GameObject newBlock;
    	private SpriteRenderer newBlockSpriteRenderer;
    
    	void Start () {
    	}
    
    	void Update () {
    			//skipped conditions for instantiation for brevity
    			//it doesn't really instantiate every step.
    			newBlock = Instantiate(originalBlock,
    			                                  new Vector2 (-8 + blockColToMake, 11),
    			                                  transform.rotation)
    							as GameObject;
    			newBlockSpriteRenderer = newBlock.GetComponent<SpriteRenderer>();
    			newBlockSpriteRenderer.sprite = startingSprite;
    	}


    The Start method will run when the current gameObject is originally created (probably at the start of the scene); at this point newBlock would still be null, and you will get a null exception. It appears to me you want the script on the gameObject to control and modify the execution of another gameObject. I would ask why you don't simply place an appropriate script on the new gameObject to control it, instead of making adjustments from this one. Either way, after creating the new game object you then need to look up the component you want to control.

    Hope that helps a bit. Although I think it might be worth taking a look at your full game architecture - there might be a more elegant way to achieve your desired goals. If you're at the next MGSA meet on the 10th, I would be happy to sit with you before or after to go over it.
  • edited
    Is there any difference between doing it like that and doing it like this?

    newBlock.GetComponent<SpriteRenderer>().sprite = startingSprite;


    I got this bit to work, after some weird bug where I copy pasta'ed the exact same stuff into a new scene and it worked but it didn't work in the other scene. Strange. Unity seems to have its share of weirdness.

    Thanks! :D I'll of course absorb as much as I possibly can :) See you next meetup :D

  • Is there any difference between doing it like that and doing it like this?
    Other than readability, no. It looked like you wanted to keep the reference around (due to the member field), but if you're only using that reference to set the sprite, then that version is fine. Just keep in mind that if newBlock doesn't have a SpriteRenderer on it, you'll get a nullref (in both cases, but in the split case you can at least do a null check).
    I got this bit to work, after some weird bug where I copy pasta'ed the exact same stuff into a new scene and it worked but it didn't work in the other scene. Strange. Unity seems to have its share of weirdness.
    It does, but I doubt this is a result of any of Unity's quirks. You probably just don't have the same objects set up in the different scene - not that you should even need to be doing this differently per scene. You should just add the same script to objects in the different scenes and set them up. Or even better, set up a prefab and drag that into the scene if possible, so if you change the setup once (on the prefab) it changes in all scenes.
  • edited
    Hey guys, I've got a bit of a weird one here :/ I have the following piece of code in an object's Update method:

    foreach (MoMoAction a in _actions)
    {
    	if (!a.HasPlayed && a.TimeStamp >= (t * (MaxAtbValue / RoundLength)))
    	{
    		a.Card.Play (a.CastingMomo, a.TargetMoMo);
    		a.HasPlayed = true;
    		PlayerInfo.Trainer.ProgressBar.Progress -= a.Card.TimeRequired / 100;
    	}
    }


    I have a break point on the if statement to check the values. I run the code in the immediate window and it returns true. I have another break point on the first line inside the if statement, but that never gets hit. Am I doing something stupid here?

    Edit: Never mind...it was me being an idiot :P
  • edited
    False alarm! Solved!

    Turns out I had a nested GameObject with a collider that I was using for something else that was eating the collision.

    Still, thanks everyone :)

    Arrrrrgh >_< Halp >_<

    I have gameObjects, which are all different things, but they all have the GravityBehaviour script component on them which governs their physics things.

    In short, I use Physics2D.OverlapPointNonAlloc to get a list of stuff that's below it. Then if the object is still moving (GravityBehaviour.ySpeed != 0), it disregards it (so moving objects don't collide against moving objects. For now. Don't ask why yet cos that's not important. For now).

    The relevant code, I have left out a lot of the other stuff:

    public float ySpeed;
    public Collider2D[] results = new Collider2D[16]; // the NonAlloc variant needs a preallocated array (or size it in the inspector)
    
    Physics2D.OverlapPointNonAlloc (new Vector2 (transform.position.x,
                                                 transform.position.y - test),
                                    results);
    
    if (results [0] != null) {
    	Debug.Log(results[0].gameObject.GetComponent<GravityBehaviour>().ySpeed);
    	if (results[0].gameObject.GetComponent<GravityBehaviour>().ySpeed != 0) {}

    The problem is that I can't get to the gameObject of the Collider2D that gets returned - I know that it's there, I can see it in the inspector when I step through the code, but apparently I can't get the GameObject that it's attached to with

    results[0].gameObject.GetComponent<GravityBehaviour>()

    Though Unity Script Reference seems to say you can. http://docs.unity3d.com/Documentation/ScriptReference/Component-gameObject.html

    Am I misunderstanding something?

    Thanks guys!!
  • Hi guys, I know I'm bombarding these questions, but I've been stuck on the same thing for days and I can't figure it out, nor does any research help :(

    I'm using 2D physics. I have GameObjects that I want to detect when they overlap other GameObjects. They are being moved around the scene via Translate, which I understand will not trigger collision events on its own.

    The main player GameObject has a BoxCollider2D (isTrigger), has a Rigidbody2D (is Kinematic)
    The other GameObject has a BoxCollider2D(not isTrigger), has Rigidbody2D (is Kinematic)

    The objects move via Translate, as I said, and I have this in the code on the main player GameObject's script:

    void OnTriggerStay2D (Collider2D other)
    	{
    		Debug.Log ("hit");
    	}


    I'm sure the rest of the script runs, as Update events all run and that's how things move.

    But this code doesn't do anything; the console is silent. What am I missing? Everywhere I read, triggers don't require force and movement, and can detect with Translate. Am I missing something silly??? @_@

    Thanks again a million guys. Why is collision detection in Unity so damn hard @_@

    (In case anyone was wondering, my game runs with Translate because any use of the physics engine makes things "rub against" each other and throw off the physics I need. My blocks slide past each other. Whenever physics blocks go past one another, even at discrete exact numbers, they affect each other.)

    Here's the example game for those who wanna see - all you can do so far is click somewhere in the box and the bear will jump-teleport there. The idea is that when he overlaps a block he'll hold it so you can throw it.

    http://www.tuism.com/share/ninchuck/

    (please let me know if it doesn't work)
  • edited
    @Tuism... it does not work. I get the download failed error message
  • Ah thanks @hermantulleken, it was a silly permissions thing. I need to remember that for the future... It's fixed and accessible!

    (I'm busy trying to rework my model to fit Unity's physics. Maybe that's what I need to do, stop working out "simple solutions" and work on fitting into their system... sigh.)
  • @Tuism
    and the bear will jump-teleport there. The idea is that when he overlaps a block he'll hold it so you can throw it.
    Can anyone say "Bear Chuck 2"....Oh happy days
    Thanked by 1Tuism
  • Just don't click outside the area - bear will teleport there and fall to his doooooom :)
    Thanked by 1Tuism
  • @tbulford It's a prototype demonstrating a bug :P
    I've done a workaround and reverted to fit Unity physics instead of my own so the problem can be... considered irrelevant :P

    @fanieG thanks for your enthusiasm, I may just have to call it something else though... Ninchuck... :P
  • @Tuism Not sure if this is applicable but this documentation page (at the bottom) says that a kinematic rigidbody collider won't send a trigger message to another kinematic rigidbody collider.
  • @tuism yeah I get that, still love tossing the poor thing to his dooom.

    I am not sure what your question is, or what result you were looking for that you are not getting right now. However, I suspect you might want to drop the colliders altogether.

    If two Rigidbody objects collide then OnCollisionEnter will be called; you should not need the colliders too. This is how it works for 3D anyway, and I suspect 2D will be the same - take a look at http://docs.unity3d.com/Documentation/ScriptReference/Collider.OnCollisionEnter.html. Important from the text: "Note that collision events are only sent if one of the colliders also has a non-kinematic rigidbody attached."

    So I would suggest leaving the bear as Kinematic, making the blocks non-Kinematic, and changing your script to OnCollisionEnter.
  • edited
    My dudes now have full collider and rigidbody2D. If I remove the colliders they don't collide - it seems rigidbodies need colliders to affect physics. That's since I'm no longer trying to write my own physics - Unity is handling colliding and landing and whatnot now. I just crank up the friction and a bunch of settings and it works pretty well, and round off positions whenever needed. There are slight glitches but it's OK.

    http://www.tuism.com/share/ninchuck/ <-- playable toy :D

    A quick question, is there a way to draw a Sprite onto the screen without having a GameObject? Like GMS's draw functions or something? I've been drawing my projectile trajectory by instantiating and destroying GameObjects, which seem to lag the game out a lot in the long term (memory leak??)

    Alternatively I guess I can Instantiate them only once and transform.position then every time they're needed on screen. Is that faster than instantiate/destroy every frame?
  • Alternatively I guess I can Instantiate them only once and transform.position then every time they're needed on screen. Is that faster than instantiate/destroy every frame?
    What you are talking about is called object pooling; it is a much better solution than allocating new garbage for every projectile.
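    As a minimal sketch of the idea (class and method names are made up, and it assumes a prefab reference set in the inspector): instead of Instantiate/Destroy every frame, inactive instances are parked in a queue and reused.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical pool: Spawn() reuses a parked instance when one is
// available, and Despawn() deactivates it instead of destroying it.
public class ProjectilePool : MonoBehaviour
{
    public GameObject prefab;
    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    public GameObject Spawn(Vector3 position)
    {
        GameObject go = pool.Count > 0 ? pool.Dequeue() : Instantiate(prefab);
        go.transform.position = position;
        go.SetActive(true);
        return go;
    }

    public void Despawn(GameObject go)
    {
        go.SetActive(false);
        pool.Enqueue(go);
    }
}
```

    Your projectiles would then call Despawn on themselves (instead of Destroy) when they become irrelevant.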
  • edited
    Thanks for the link!

    I agree that it would be better than creating objects for each shot, but I was wondering if there wasn't a way to draw the sprite without GameObjects? That would save even more cycles, would it not? (And be easier to code, if it's possible.)
  • There are a few ways to draw without GameObjects, but all of them are bad in some way. Moreover, they go against the "Unity way", and in the long run will set you off on the wrong track.

    Pooling is one option.

    You could also use particles, if that could fit the look of the trajectory you are going for (I assume it could be made fairly similar to what you use now).

    There is also a trail renderer, which is sometimes useful for curve-like things and fairly efficient.

    Also... how many objects are you in fact destroying and creating (on average, approximately) each frame? Usually the instantiation itself would be the bottleneck indicating that you should pool... not degraded performance over time. If it _is_ a memory leak, then you should rather try to fix the leak... (although I admit this can be tricky)... But is it also possible that it is something else? (In one game, for example, I had an object that would from time to time fall off the screen... over time the physics would slow down as the object fell further and further into infinity.) I just wonder because it's usually quite extreme circumstances where pooling is necessary (I have only seen it used twice in my life).
  • Are you destroying your projectiles after they become irrelevant?
  • @hermantulleken yes! Can I draw particles? I don't know how to, I've been looking, but I haven't found how to spawn a particle at location xyz. All I can find are emitters.

    I haven't figured out trails yet, will give it a look, but yeah I think I want points rather than a line. And it seems easier :P

    I am definitely not dropping stuff off the frame, that I know. In Jack King my stuff that went off into infinity would be destroyed at certain x and y positions, that object leaking type of thing I am prepared for.

    I tested it a bit where I teleport around a bit without throwing anything, vs throwing stuff around more. When I threw stuff it definitely lagged more - I therefore put it down to those things being instantiated/destroyed taking up memory.


    @farsicon we're talking about the bits that make up the trajectory line in the Bear Chuck prototype - and yes I'm definitely destroying them, heck, they're self-destroying, so I just instantiate them and they kill themselves after 1 frame. A whole bunch of them get made each frame, making it very expensive.
  • For those trajectory lines, you can perhaps look at using a line renderer. I don't know how set you are on using a dotted line, but if I were doing it, I'd probably make a bunch of scrolling triangles/arrows or something. :)
    Thanked by 1mattbenic
  • Scrolling triangles/arrows? Does that change the way they're rendered? Right now I don't really care too much what they look like, it's really more about getting that line without lag.

    Line renderer - will look into it. The way I'm doing it now (iterating through a bunch of points each frame the line appears), it'd just be so much easier to draw at each point. Dunno why that's so hard in Unity that no one seems to know how XD
  • If you go the particle route, you can Emit them one by one:

    http://docs.unity3d.com/Documentation/ScriptReference/ParticleEmitter.Emit.html
    (See the Emit overloads that also take positions.)

    Alternatively, you could emit a bunch, and then manipulate their positions using the particle system - get all the particles, and update their positions as necessary.

    http://docs.unity3d.com/Documentation/ScriptReference/ParticleSystem.GetParticles.html
    http://docs.unity3d.com/Documentation/ScriptReference/ParticleSystem.Particle.html

    The latter approach is probably better if you move the trail around from frame to frame.
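    As a rough sketch of that second approach (class and helper names are made up, and it assumes the ParticleSystem has already emitted enough particles):

```csharp
using UnityEngine;

// Hypothetical example: reposition already-emitted particles along a
// trajectory each frame, using GetParticles/SetParticles.
public class TrajectoryDots : MonoBehaviour
{
    public ParticleSystem dots;
    private ParticleSystem.Particle[] particles;

    void LateUpdate()
    {
        if (particles == null)
            particles = new ParticleSystem.Particle[dots.particleCount];

        int count = dots.GetParticles(particles);
        for (int i = 0; i < count; i++)
        {
            // PointOnTrajectory is a made-up helper that would return
            // the i-th point of your precomputed arc.
            particles[i].position = PointOnTrajectory(i);
        }
        dots.SetParticles(particles, count);
    }

    Vector3 PointOnTrajectory(int i)
    {
        // Your arc math would go here.
        return Vector3.zero;
    }
}
```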



  • edited
    Has anyone experienced a build crashing when using d3d11 and changing to and from fullscreen?

    Also, has anyone had a problem with performance while in fullscreen with d3d9?

    Cause whenever I switch to fullscreen in Windows 8.1 I get weird bugs, but on Windows 7 it's all fine.

    I am running Windows 8.1 on an FX-4100 with a GTX 560, and using Unity 4.3.1, but the same happens with 4.3.
  • Hey guys,

    I'm busy working on a platformer. I have a particular type of platform that follows a path. I do this by passing in an array of positions that it must go through, then just lerping to the positions. My issue is that when my little main character dude is on the platform while it's moving side to side, he stays behind :/

    I'm not applying a force to the platform because I'm using the lerp function, but is there a way I can calculate the force and then just apply that to my player?
  • edited
    @CiNiMoD I suggest you parent the character to the moving platform once the character comes into contact with a trigger near the moving platform. That has worked fine for me, but I am sure you can calculate the velocity of the platform and then apply that to the player.

    Off the top of my head, I think you could add something like this to your character control script:
    if (PlayerOnMoveingPlatform == true)
    		{
    			GetComponent<Rigidbody2D>().velocity = CharacterVelocity + MovingPlatform.GetComponent<Rigidbody2D>().velocity;
    		}


    Edit : You would need to work out the CharacterVelocity beforehand.
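    Since the platform is driven by Lerp rather than forces, its Rigidbody2D velocity will read zero - but you can estimate a velocity from the per-frame position change instead. A sketch (class name made up):

```csharp
using UnityEngine;

// Attach to the lerped platform: estimates velocity from how far the
// transform moved since last frame, since no physics forces are applied.
public class PlatformVelocity : MonoBehaviour
{
    public Vector2 EstimatedVelocity { get; private set; }
    private Vector3 lastPosition;

    void Start()
    {
        lastPosition = transform.position;
    }

    void Update()
    {
        // Position delta divided by frame time approximates velocity.
        EstimatedVelocity = (transform.position - lastPosition) / Time.deltaTime;
        lastPosition = transform.position;
    }
}
```

    The character script could then add EstimatedVelocity instead of reading the platform's Rigidbody2D velocity.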

  • if (PlayerOnMoveingPlatform == true)
    		{
    			GetComponent<Rigidbody2D>().velocity = CharacterVelocity + MovingPlatform.GetComponent<Rigidbody2D>().velocity;
    		}


    Edit : You would need to work out the CharacterVelocity beforehand.
    Yeah, the issue I have with this is I am not applying any forces on my block, so velocity will always return 0,0,0 :(
  • Hey ya'll! Hope everybody is having / had a good holiday period thingy-magig!

    Sooooo I'm chilling, codin' 'n shit... When my inspector stops updating. Full run-down of the issue:

    - When I change the default value of a public variable in my script and save the script, the change doesn't show up in the inspector.

    - When I change the data in the inspector, the code doesn't change (not sure if this is intended or not... hope not).

    - No matter what I do, if I've ever changed the data of a variable in the inspector, the game will always take the data from the inspector and ignore any modifications to that specific variable in the script.

    - The only way for the game to use the data modified in the script, is to right-click the inspector and 'reset' it. Then it changes all the values to what I have in the script.

    Now, I'm aware that when you run the game and make changes in the inspector, the changes will reset when you stop running the game. That's fine, I know that's a feature in Unity and that's not my issue at all. I'm a bit concerned that I can't really go back and forth between the inspector and my script without seriously keeping track of where things have changed and which values I want to keep etc. It's just a bit of a mess, where I believe it should be pretty straightforward.

    Can anybody advise on this?
  • edited
    Where are you assigning to the variable in your script? Because if I understand you correctly, that's the intended behaviour.

    When you set a public variable in your script outside of one of the runtime methods, you're setting what the default value is that it shows up as. When you change the value of that variable in the Inspector, then Unity marks that variable as having been edited, and keeps that as the edited value regardless of what you change the default value to in your script.

    If you're changing the value inside one of the runtime methods (Awake, Start, Update, etc.) then it should update in the Inspector as well while the game is running.

    Changing anything in the Inspector will not change any of your scripts.
    Thanked by 1DarkRa88iT
  • Thank you!

    I am indeed setting these variables outside of both Start and Update. When is Awake executed? Is it the area outside any functions like Start and Update and any custom functions that I've created? (Please excuse my noob-ness)

    I'm ok with some of my public variables just being the default values (like some of the booleans that I don't really need to modify myself). But I would prefer it if I can rapidly modify and check the result of something like a float that dictates the strength of something. In order to do that, should I then rather put them in the Start function? From your explanation, it seems like the way to go, I just want to be sure that it wouldn't be seen as bad practice or somehow impact on performance*.

    * A note on performance:

    I know somebody at a MGSA meetup once told me that you shouldn't break yourself trying to optimize from the start, but rather do what works and if you have any performance issues, try to iron those out. But it doesn't hurt to learn best-practice methods from the start, does it?

    Considering the ridiculous simplicity of the game I'm making right now, I could probably go bananas and never run into any problems, but I'm looking to the future. If I ever want to attempt slightly bigger projects, I don't want to be held back by my lack of knowledge.

    Thanks again for your response, @Elyaradine!
  • Um. @MattBenic posted something earlier in this growing thread with a link to something that lists a bunch of built-in Unity methods, and the order in which they execute. Awake() is executed the moment something is created, Start() is the moment it's enabled (happens after Awake) and Update is every frame. There are a couple of others, though I think those three are the most common.

    If it's just tweaking numbers, I'd just do that in the inspector while the game's running, and copy those values back in when you've stopped the game. (It's a hassle if there's a whole bunch of values that you're tweaking at the same time, in which case: you could create a prefab out of what you've got while the game's running, and the values should save; or you could write an editor script that copies them for you.)

    And yeah, I throw my vote into forgetting about performance until you're finding that you've got some kind of lag that's actually stopping you from play-testing your game. And when that happens, you'll be able to see first-hand what kind of costs things have when it comes to performance (and see exactly what happens, and remember it more clearly) rather than following something just because somebody (who may or may not actually know what he's talking about) once said it'd be faster.
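    A quick way to see that execution order for yourself is to drop a throwaway logger script onto an object and watch the console:

```csharp
using UnityEngine;

// Logs the order in which Unity calls the common lifecycle methods.
public class LifecycleLogger : MonoBehaviour
{
    void Awake()  { Debug.Log("Awake: called when the object is created"); }
    void Start()  { Debug.Log("Start: called once, before the first Update"); }
    void Update() { Debug.Log("Update: called every frame"); }
}
```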
    Thanked by 1DarkRa88iT
  • edited
    @DarkRa88iT
    But it doesn't hurt to learn best-practice methods from the start
    Indeed (well, to some extent).

    Good practice is to not worry about efficiency too much; when you do run into problems, to measure it and then act accordingly.

    You will find many lists of efficiency best-practices, and about half of them are wrong, and there are several reasons for this:
    • Things change.
    • Things are different on different platforms.
    • Things have different levels of importance in different contexts.
    • Things interact in unpredictable ways, leading to different results in different circumstances.
    There are also other reasons to not worry about efficiency when you do not need to:
    • Efficient programs are (often) more complex and harder to understand (ugly). This leads to bugs, and ironically, even to slow code. In particular, efficient code is very hard to optimise further (not because it runs at the limit, but because it's difficult to make changes).
    • The chances are good that you may optimise the wrong thing.
    • Efficient code is more expensive to write. The extra effort required could be better spent on other things.
    If you really want to concentrate on good programming practice, you can't go wrong by learning how to write readable code, and design simple, transparent "architectures". Readable code is also more expensive to write, but it pays off way better than anything else a programmer can do. One advantage is that well-designed, readable programs are much easier to optimise.

    Having said that, I would recommend one book though: Programming Pearls. It is not about efficiency per se, but they talk about optimising on different levels, and also give some concrete examples of how programmers go about it. In the game world, we are pre-occupied with optimising at a very low level (don't do this, don't use that), forgoing opportunities to optimise algorithmically, for example. (The book is a good programming book for many other reasons too).

    Edit
    Also: http://stackoverflow.com/questions/385506/when-is-optimisation-premature
  • Hey guys,

    So I'm working on a Lemmings style game. One thing that I am battling to figure out is this. One of the powers that can be assigned to a little creature is giving it the ability to dig. So now I have a ground object, how do I destroy only a section of it so that the creature can fall through the ground?
  • CiNiMoD said:
    Hey guys,

    So Im working on a Lemmings style game. One thing that I am battling to figure out is this. One of the powers that can be assigned to a little creature is giving it the ability to dig. So now I have a ground object, how do I destroy only a section of it so that the creature can fall through the ground?
    You're going to have to split your terrain object into multiple smaller objects dynamically. Then it becomes a matter of managing your collisions correctly so that you can remove only those meshes that are being hit by your "eraser" shape... Or, if your terrain is 2D and you've got pixel-level access to a dynamic texture, you can edit the texture manually (usually by drawing in the alpha channel) when you want to carve out sections.
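    As a rough sketch of that alpha-channel idea (it assumes a readable Texture2D and that you've already mapped the dig point to pixel coordinates - collision shapes would still need to be updated separately):

```csharp
using UnityEngine;

// Carves a circular transparent hole out of a terrain texture.
// The texture must be marked readable in its import settings.
public static class TerrainCarver
{
    public static void CarveHole(Texture2D terrain, int cx, int cy, int radius)
    {
        for (int y = cy - radius; y <= cy + radius; y++)
        {
            for (int x = cx - radius; x <= cx + radius; x++)
            {
                if (x < 0 || y < 0 || x >= terrain.width || y >= terrain.height)
                    continue;

                int dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                {
                    Color c = terrain.GetPixel(x, y);
                    c.a = 0f; // transparent = "dug out"
                    terrain.SetPixel(x, y, c);
                }
            }
        }
        terrain.Apply(); // upload the modified pixels to the GPU
    }
}
```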
  • CiNiMoD said:
    Hey guys,

    So Im working on a Lemmings style game. One thing that I am battling to figure out is this. One of the powers that can be assigned to a little creature is giving it the ability to dig. So now I have a ground object, how do I destroy only a section of it so that the creature can fall through the ground?
    There are a couple ways to get "destructible" terrain. I like what @dislekcia mentioned, in 2D alpha masking to dynamically change what the player sees and what collision boundaries are is a good approach.
    However, in 3D there are a couple of ways to do this: replacing the object dynamically, as dislekcia suggested; depending how far you are, graphically, you could consider implementing voxels; you could also change the height map info. These all have different pros and cons, depending on usage, and may or may not be as useful for you. Here are a couple of links I found; I hope they help.

    http://answers.unity3d.com/questions/11093/modifying-terrain-height-under-a-gameobject-at-run.html
    http://u3d.as/content/different-methods/terrain-destruction/2ze
    http://www.sauropodstudio.com/how-to-make-a-voxel-base-game-in-unity/
    http://studentgamedev.blogspot.com/2013/08/unity-voxel-tutorial-part-1-generating.html
  • Hai guys

    We're working with sprites and animations in Unity, and were wondering if there was a way to pause an animation at a certain frame? We found the Animation.Stop function, but it has the caveat "Stopping an animation also Rewinds it to the Start."
    ... which kinda defeats the point of pausing at a specific frame.

    Is there a way anyone knows of?

    Thanks! :D
  • IT WOORRRKS!! Thanks Ed!! :D Didn't think to google "pause" instead of "stop". XD
  • Yeah, it's a silly thing but somehow stop means pause and go back to the beginning of whatever is playing. I'm not even sure where I came to that realization but it helps when working with things like this. :)
  • Also worth noting that you can manually set AnimationState.time or AnimationState.normalizedTime to set an animation to a specific point.

    You can even use it to step through the frames and play animations independent of Time.timeScale.
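    For example, something along these lines (assuming a legacy Animation component; the clip name "walk" is illustrative):

```csharp
using UnityEngine;

// Freezes a legacy animation at an arbitrary point by driving
// normalizedTime directly and zeroing playback speed.
public class AnimationScrubber : MonoBehaviour
{
    public Animation anim;

    public void PauseAt(float normalizedTime)
    {
        AnimationState state = anim["walk"];
        state.normalizedTime = normalizedTime; // 0 = first frame, 1 = last
        state.speed = 0f;                      // freeze playback
        anim.Play("walk");
        anim.Sample();                         // force-apply the pose now
    }
}
```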
    Thanked by 1Tuism
  • Squidcor said:
    Also worth noting that you can manually set AnimationState.time or AnimationState.normalizedTime to set an animation to a specific point.

    You can even use it to step through the frames and play animations independent of Time.timeScale.
    Probably important to bear in mind that you may need to call Animation.Sample() in order to force an animation to 'skip' to the frame you've set this way, especially if it's not already playing. It IS, however, nice to know that that technique works in edit mode, so you can use it to build animation preview and setup tools.
  • Hey guys. A small (yet very annoying) problem I'm having is the following.
    When raycasting
    Physics.Raycast (transform.position, transform.TransformDirection (Vector3.right), 1F)
    I have to use a whole float (not a partial one, like 0.5F), otherwise it does not work... unfortunately it then picks up further than I would have liked.

    Any ideas?
  • It's a bit difficult to comment from that, but have you tried using Debug.DrawRay or Debug.DrawLine yet (can't remember the exact methods)? Those are quite useful for checking these things visually.
  • SquigyXD said:
    Hey guys. A small (yet very annoying) problem I'm having is the following.
    When raycasting
    Physics.Raycast (transform.position, transform.TransformDirection (Vector3.right), 1F)
    I have to use a whole float (not a partial one, like 0.5F), otherwise it does not work... unfortunately it then picks up further than I would have liked.

    Any ideas?
    Do you get a collision structure back, or a collection of collisions? If you do, it should be a simple matter of checking through those collisions to see how far along the ray they happened; if they're too far along it, you can just ignore them.
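    For example, with the Physics.Raycast overload that returns a RaycastHit, you can cast the full unit and then filter by the actual hit distance (a sketch only):

```csharp
using UnityEngine;

// Cast up to a full unit (which reportedly works), then ignore hits
// beyond the half-unit range actually wanted.
public class ShortRaycast : MonoBehaviour
{
    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position,
                            transform.TransformDirection(Vector3.right),
                            out hit, 1f)
            && hit.distance <= 0.5f)
        {
            Debug.Log("Hit " + hit.collider.name + " at distance " + hit.distance);
        }
    }
}
```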