Unity general questions

Comments

  • I have never had this problem, so this is just a shot in the dark, but can't you save the distance between the 2 points e.g.

    newDistance = distance2 - distance1

    and then use that variable in your raycast declaration i.e.

    Physics.Raycast(transform.position, transform.TransformDirection(Vector3.right), newDistance)

    That might allow for the partial float? Like I said, that's just a guess though. I'm sure one of the better coders will correct me.
  • @farsicon, @dislekcia & @FanieG... thanks guys... just realized my mistake (wow, I feel stupid).

    So it turns out that my character is exactly 1 unit long... that means 0.5 from the center... which means that unless a physics object clipped through, the raycast would never have detected it.

    Again thanks guys for taking the time to help fix/find/humiliate me =P
  • I have fallen prey to that exact issue countless times (I seem to ray-cast just at the right frequency to forget the issue each time).
  • @SquigyXD - So how did you fix it? Did you use a layerMask, ignore the player's collider when raycasting, or did you move the origin point of the raycast? Just asking in case I ever run into a similar problem.
  • Just for interest:

    void OnBecameVisible() runs when an object enters the camera's view. This method also triggers for the EDITOR CAMERA!!!
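    A minimal sketch of how these callbacks attach (class name and log text are made up):

```csharp
using UnityEngine;

public class VisibilityLogger : MonoBehaviour {
    void OnBecameVisible() {
        // Fires when ANY camera starts rendering this object's renderer -
        // including, in the editor, the scene view camera.
        Debug.Log(name + " became visible");
    }

    void OnBecameInvisible() {
        Debug.Log(name + " is no longer visible to any camera");
    }
}
```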
  • Hey guys!

    We're running into an unexpected problem in Unity publishing to iOS - the timing of things seems to be different. Here's the scenario:

    We have two attacks that follow each other exactly when you tap twice. On the opening screen we have two characters, and if you tap twice, you will hit both of them - because it follows *exactly*.

    Running the game in Unity editor and web build yields the desired results.

    However, when we build to an app, the chained attack only hits the first character and misses the second. It seems like the *speed* of the animation/game/iteration is different between the editor/web build and the app.

    What could cause this? And is there anything we can do to ensure that it doesn't differ between the two builds???

    Thanks so much!!!!
  • iDevices ALWAYS have vsync on, and it sounds like this is the problem you're having. By default, Unity games are set to sync every second vblank in this case, and it's possible that the resultant 30fps could be causing your problem.

    Try setting Application.targetFrameRate to 60 and see if that helps, and make sure the VSync count in your quality settings isn't set to sync on every second vblank.
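    A minimal sketch of those two settings together (a hedged suggestion, not a guaranteed fix):

```csharp
using UnityEngine;

public class FrameRateSetup : MonoBehaviour {
    void Awake() {
        // Request 60fps; on iOS the effective rate is still governed by vsync.
        Application.targetFrameRate = 60;
        // Sync on every vblank rather than every second one.
        QualitySettings.vSyncCount = 1;
    }
}
```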
  • @Tuism, it's hard to tell without looking at the code, but a common culprit for this is frame-rate-dependent code. If you use distances or timings in your update functions, you should generally multiply them by Time.deltaTime (the time that passes between frames) to make them frame rate independent.
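    A quick sketch of the idea (hypothetical mover script; the speed value is arbitrary):

```csharp
using UnityEngine;

public class Mover : MonoBehaviour {
    public float speed = 2f; // units per SECOND, not per frame

    void Update() {
        // Scaling by Time.deltaTime means the object covers the same distance
        // per second whether the game runs at 30fps or 60fps.
        transform.position += Vector3.right * speed * Time.deltaTime;
    }
}
```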

    Edit: @Chippit probably knows more...listen to him first :P
  • @Rigormortis: You are not wrong! The iOS fixed frame rate merely exacerbates that code flaw, though, and is an easy test.
  • Ok thanks so much for the insight guys, I know about Time.deltaTime but wasn't sure where it might apply... I'll pass these suggestions on to @Loet when he's up (he goes to bed early).

    We found some stuff about fixed timesteps and going from 0.02 to 0.03, which didn't seem to help - it was consistently bugging out, so it wasn't a performance issue.

    Thanks guys!

    On another note - we've noticed that at parts of our game, especially the beginning bits, there'd be lags so large that a collision that should have happened gets skipped entirely (the zombie passes through the player). Because it happens early on and then gets better, we suspect it's a loading issue. Are there guidelines on how to preload content to make sure it doesn't try to load stuff as it goes?
  • Are you seeing this in the editor or in builds? The behaviour between the two is slightly different.

    On deployed builds, Unity walks deep reference trees and does preload all required assets in your scene, plus everything referenced by prefabs your objects point to, and the like. HOWEVER, it doesn't necessarily shunt the required materials/shaders/textures/etc. to the GPU until just before they are drawn for the first time, so you often get spikes when new materials are used for the first time, especially materials with new shaders.

    Try Shader.WarmupAllShaders for that, it covers most of your cases.
  • Chippit said:
    Try Shader.WarmupAllShaders for that, it covers most of your cases.
    Only gotcha with WarmupAllShaders is that it can suck up a lot of memory, as it loads all permutations of every shader... and it can increase load times a lot on mobile too. More than likely a moot point with Dead Run, but with Bladeslinger it cost us somewhere around 20 seconds of additional load time and, IIRC, 20-30MB of additional RAM.

    As with all things in Unity... never trust and test test test!
  • Hi, there's this thing that seems utterly ridiculous, dunno how this isn't working :(

    This was declared right at the top:
    private Vector3[] p1LastBlockDrawnPos;

    Then in the code I have this, after I instantiated a newBlock. That part's known to work.
    newBlock.transform.parent = p1BlocksControl.transform;
    p1LastBlockDrawnPos[p1BlocksMade] = newBlock.transform.position;

    The code runs, but I keep getting this error:
    NullReferenceException: Object reference not set to an instance of an object

    Which points to the 2nd line of the code above. It seems like I'm not declaring the variable properly, or not filling in the array properly so it's hitting a null reference...

    What am I doing wrong? XD

    Thanks awesome people!
  • @Tuism: is there something like
    p1LastBlockDrawnPos = new Vector3[a number];

    anywhere in your code? If not, then it's not actually an array at all, it's a variable that points to nothing, which is what that error is telling you.
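    In other words, something like this is needed before the array can be indexed (the capacity of 64 is a made-up example):

```csharp
using UnityEngine;

public class BlockTracker : MonoBehaviour {
    private Vector3[] p1LastBlockDrawnPos;

    void Start() {
        // Allocate the array before first use; until this runs, the field
        // is just a null reference.
        p1LastBlockDrawnPos = new Vector3[64];
    }
}
```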
  • OH.

    I think I just realised that an array must have x elements in it before you can start using it, unlike Lists, which I'm used to. OK thanks!
  • Can "a number" be 0, so the array is ready to take in more elements, without it being pre-defined?
  • No. Arrays are fixed-size in memory - you can't grow or shrink them after creation. If you DO want variable-size lists (which are really just wrappers around arrays that copy and grow), then look in System.Collections.Generic for List&lt;T&gt;.

    using System.Collections.Generic;
    ...
    List<Vector3> p1LastDrawnBlockPos = new List<Vector3>();
    ..
    p1LastDrawnBlockPos.Add(newBlock.transform.position);
  • @Tuism: Either way, you're still going to have to actually create that list in the first place. If you don't, it's back to the null reference error.
  • I feel like a real idiot when I'm spamming questions like this :( But I really can't work it out and it's driving me nuts and Google doesn't help :(

    I have this code that checks a position to see if it hits a block. The scene looks like this:
    image

    The code:
    bool CheckPosForBlock (Vector3 where) {
        Ray ray = Camera.main.ScreenPointToRay (where);
        Debug.DrawLine (Camera.main.transform.position, where);
        RaycastHit2D hit = Physics2D.GetRayIntersection (ray, Mathf.Infinity);
        if (hit.collider != null) {
            return true;
        } else
            return false;
    }


    It works, usage like this, works:

    mouseIsOverBlock = CheckPosForBlock(Input.mousePosition);


    But when I use it differently - to check if there's a block to the left or right or wherever (exactly 1 block away) of where I'm pointing - it doesn't work; they all return false:

    Vector3 mousePosition = Camera.main.ScreenToWorldPoint (Input.mousePosition);
    mousePosition = new Vector3 (Mathf.Round (mousePosition.x), Mathf.Round (mousePosition.y), 0);


    if (	CheckPosForBlock(new Vector3 (mousePosition.x - 1, mousePosition.y,mousePosition.z))
        || 	CheckPosForBlock(new Vector3 (mousePosition.x + 1, mousePosition.y,mousePosition.z)) 
        ||  CheckPosForBlock(new Vector3 (mousePosition.x, mousePosition.y + 1,mousePosition.z))
        ||  CheckPosForBlock(new Vector3 (mousePosition.x, mousePosition.y - 1,mousePosition.z))
    	) {
    	do stuff
    }


    (assume only the left one, that is, x-1, is on in the screenshot)

    There's a debug ray in the CheckPosForBlock code, and strangely enough, when it's working, the editor shows the ray shooting off elsewhere into the distance (as you can see in the screenshot), whereas when it doesn't work, the ray shoots AT THE RIGHT SPOT, i.e. at a block, though it doesn't pass through it (could that be the problem??)

    Am I missing something obvious???

    Much appreciated guys!!

    (the gist of what I want to do here is: when the mouse is down, it draws a block where the mouse is, and stops drawing because the mouse is on it. Then when the mouse moves off, it checks the spot to make sure it's adjacent to another block, then puts down another block. This way it won't draw blocks into infinity, and they're uninterrupted and continuous)
    blockproblem.jpg
  • Forgive the terseness of response, mobile devices aren't ideal for this.

    Your problem is that you're working in two different coordinate spaces. The original example works because mousePosition is in screen space, and your ray creation operates on screen space coordinates. The second example sends positions to CheckPosForBlock in WORLD space, so your ray creation is completely off. To fix it, you can create the ray in world space too, using the normal Ray constructor with the origin being the world space camera coords and the direction being a vector from there through your calculated world position.

    That said, while it will likely work in most cases, raycasting this way is probably not the most durable way to achieve what you want, and you might be better off storing your blocks in some sort of position-indexable data structure that you can look up from.
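    One possible shape for that lookup structure (a sketch assuming blocks sit on integer grid coordinates; all names are made up):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BlockGrid : MonoBehaviour {
    // Keyed by rounded grid position, so checks become dictionary lookups
    // instead of raycasts.
    private Dictionary<Vector2, GameObject> blocks = new Dictionary<Vector2, GameObject>();

    public void AddBlock(GameObject block) {
        blocks[Round(block.transform.position)] = block;
    }

    public bool HasBlockAt(Vector3 worldPos) {
        return blocks.ContainsKey(Round(worldPos));
    }

    static Vector2 Round(Vector3 p) {
        return new Vector2(Mathf.Round(p.x), Mathf.Round(p.y));
    }
}
```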
  • A kinda simple question, I think:

    I'm using this to make blocks move every once in a while. The block object contains child objects. No biggie, really.
    
    	void Update () {
    		if (dropCount <= 1f) {
    			dropCount += Time.deltaTime;
    		} else {
    			dropCount = 0;
    			transform.position = new Vector3 (transform.position.x, transform.position.y+1f, transform.position.z);
    		}
    	
    	}


    The (very raw) prototype:

    But it feels like they move... delayed. Like each block moves into position a split second after the others. Like... interlaced TV or something. Am I just cuckoo, or is there some simple convention that I'm not following?

    Web build: http://www.tuism.com/madtris
  • A slight logical flaw is that you probably want to do this instead:

    dropCount += Time.deltaTime;
    if (dropCount >= 1)
    {
    	// Important to subtract 1 instead of setting to 0 - zeroing throws away the fractional remainder and makes your timing drift
    	dropCount -= 1; 
    	// Do stuff
    }


    I don't know if that's strictly your problem, though. My intuition tells me that all your objects, if they're created and enabled on exactly the same frame, should accumulate time in exactly the same way and thus move on the same frame too.

    BUT if I assume that your objects may not necessarily be created on the same frame, you probably want to do this operation relative to the global Time.time instead of having each individual object keep its own timer, and you just set each object with its creation time to use as an offset from that. OR just have all your objects controlled by one master script that handles all the motion (so that individual groups move on the same timer, assuming that's the behaviour that you actually want)
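    The global-clock idea could be sketched like this (hypothetical script; assumes a one-second drop interval):

```csharp
using UnityEngine;

public class DropOnGlobalTick : MonoBehaviour {
    private float nextDropTime;

    void Start() {
        // Align every block to the same global clock: the next whole second
        // of Time.time, regardless of when this object was created.
        nextDropTime = Mathf.Ceil(Time.time);
    }

    void Update() {
        if (Time.time >= nextDropTime) {
            nextDropTime += 1f; // advance the schedule; no fractional drift
            transform.position += Vector3.up;
        }
    }
}
```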
  • Thanks @Chippit!! Appreciate it much!

    I found out where my code was going wrong - it turns out I was putting the blocks into separate containers, so they moved out of sync... But this is where I don't understand why they ended up in separate containers:

    Simply put: when there are 4 blocks, all made children of p1BlocksControl (confirmed, as I can see it in the editor; it's a public variable and a permanent element in the scene), this runs. The way I have it now, the foreach only hits/sees two of the four blocks, so only 2 of the 4 blocks move away.


    if (p1BlocksMade == 4) {
        GameObject newP1Drop = Instantiate (p1DroppingController, 
                                            new Vector3 (Mathf.Round (Input.mousePosition.x), Mathf.Round (Input.mousePosition.y), 1f), 
                                            Quaternion.identity) as GameObject;
        foreach (Transform genericBlock in p1BlocksControl.transform) {
            genericBlock.gameObject.transform.parent = newP1Drop.transform;
        }
    }


    Now what's funny is, if I take the foreach lines (3 of them) and copy them two more times like below, all 4 blocks are correctly allocated into the new object. Why the heck is that? Is there some kind of delay in running a foreach that results in weird behaviour? (I remember the "foreach is bad" thread, but I don't understand it - it seemed to only be about performance, not actual stuff being broken?)

    if (p1BlocksMade == 4) {
        GameObject newP1Drop = Instantiate (p1DroppingController, 
                                            new Vector3 (Mathf.Round (Input.mousePosition.x), Mathf.Round (Input.mousePosition.y), 1f), 
                                            Quaternion.identity) as GameObject;
        foreach (Transform genericBlock in p1BlocksControl.transform) {
            genericBlock.gameObject.transform.parent = newP1Drop.transform;
        }
        foreach (Transform genericBlock in p1BlocksControl.transform) {
            genericBlock.gameObject.transform.parent = newP1Drop.transform;
        }
        foreach (Transform genericBlock in p1BlocksControl.transform) {
            genericBlock.gameObject.transform.parent = newP1Drop.transform;
        }
    }

  • @Tuism, the problem (as far as I can tell by looking at it) is that by changing the parent of your genericBlocks you are actually changing the children of p1BlocksControl's transform. This means that by the time the next loop iteration comes around, the items that the loop is supposed to be iterating over have changed. You can try storing references to the children before changing them, and then iterating through those.

    Ala:
    Transform[] childBlocks = new Transform[p1BlocksControl.transform.childCount];
    
    for (int i=0 ; i < childBlocks.Length ; i++)
    {
        childBlocks[i] = p1BlocksControl.transform.GetChild(i);
    }
    
    foreach (Transform child in childBlocks)
    {
        child.transform.parent = newP1Drop.transform;
    }


    Edit: Or even better...

    while (p1BlocksControl.transform.childCount > 0)
    {
        p1BlocksControl.transform.GetChild(0).parent = newP1Drop.transform;
    }
  • Yay, it works! I'm struggling to get that logic and understanding into my head though - so when foreach does its thing, it's actually iterating through the list of children in the transform, and when I take each one out, the order screws up, so:

    There was four things, it evaluates thing one:
    1 2 3 4
    And takes 1 out:
    2 3 4
    It moves to the "second" element:
    2 3 4
    And takes that out:
    2 4
    And thinks it's done cos there's no 3rd element.

    OMG How the hell was anyone supposed to know that's what foreach does?? I would have thought it would go through the content of the container until, you know, EACH is achieved?

    Also,

    Does that mean that each child of another gameobject can be accessed via GetChild(i) up to childCount? They're always numbered and ordered (by order of creation I'm guessing)? That's also new to me :)

    Thanks so much!!!! There was NO WAY I would have been able to figure this one out myself. OMG.
  • Yeah, I think @Squidcor's on to it. Usually C# collections throw exceptions when you iterate with a foreach and the collection changes. All the native ones do, but since Unity overrides things to implement their own behaviour (don't get me started on their idiotic bool implicit conversion and Equals() overrides), they've probably neglected to do it, possibly for optimisation reasons, or possibly because they're dumb.

    As an alternate to Squid's solution, another handy trick to be aware of is counting backwards through enumerations, which also handily solves problems where you need to remove elements from lists that reorder their own elements when they are removed.

    for (int i = childBlocks.Length - 1; i >= 0; i--)
    {
        BlocksControl.transform.GetChild(i).parent = newP1Drop.transform;
    }


    As a side note, I was totally unaware that you could foreach through a transform to iterate its children. I don't even see that behaviour documented anywhere. Useful shorthand, that!

    EDIT for ninja:

    Also, your description isn't strictly true - it's really down to the implementation of the enumerator for that particular class, and they could be handling it in all sorts of undocumented ways. Foreach is a special kind of syntactic sugar that operates on IEnumerable objects, stepping through the collection via their enumerators. For this to even be valid, the behaviour I described above, where exceptions are thrown when the collection changes, MUST be happening. If it doesn't, it's a flaw in the implementation. Which is to say, what you're seeing is likely a Unity bug.
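    For the standard collections this is easy to demonstrate outside Unity (plain C#, mutating a List&lt;T&gt; mid-foreach):

```csharp
using System;
using System.Collections.Generic;

class ForeachDemo {
    static void Main() {
        var items = new List<int> { 1, 2, 3, 4 };
        try {
            foreach (int i in items) {
                items.Remove(i); // mutating the collection mid-iteration
            }
        } catch (InvalidOperationException) {
            // List<T>'s enumerator detects the modification and throws -
            // the behaviour Unity's Transform enumerator apparently lacks.
            Console.WriteLine("Collection was modified during enumeration");
        }
    }
}
```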
  • Hi guys.

    First of all, I want to really thank everyone who's helped and continues to help here. I feel like I'm leeching off all you great people, and that makes me feel icky. Then I realise that I would do the same for anyone here if I could, and then I understand what it's all about and I feel less icky :)

    Thank you ^^



    False alarm, it was bad bracketing >_<

    But really, I meant the first part :)
  • There's something very wacky with the number of braces in your code, especially in your Update() function - don't you get compiler errors? It looks like you pasted the else part before the end brace of the for loop. This may cause your compiler to not recognize the code following the error.

    }
           } else {
             transform.position = new Vector3 (transform.position.x, transform.position.y+1f, transform.position.z);
           }


    should be:

    } else {
             transform.position = new Vector3 (transform.position.x, transform.position.y+1f, transform.position.z);
           }
           }


    Also a note -> if you don't have start and end braces around code after if/while/for, then only the first following statement (up to the next ";") is seen as part of the "loop", and the lines after that are seen as new commands. For example:

    if (hit.collider.transform.tag == "block_stopped") {
             Debug.Log('y');
             return true;
           } else
             Debug.Log('n');
             return false;


    The last return false will not be seen as part of the else statement in your if. To fix it, you have to add braces or remove the Debug.Log. See below:

    if (hit.collider.transform.tag == "block_stopped") {
             Debug.Log('y');
             return true;
           } else {
             Debug.Log('n');
             return false;
          }



  • farsicon said:
    There's something very wacky with the number of braces in your code, especially in your Update() function... [snip]
    Yes - bad braces made it not work, but I've fixed the braces and it still doesn't work, so now I'm wondering about some other problem... Basically it looks like it's not checking correctly and going through things, and my debug ray draw says it's always checking in the same spot, unless my debug ray is wrong.

    But thanks for that bit! On to the next part XD
  • @tuism: :) cool. feel free to share if you need more help.
  • @farsicon thanks so much bro ^^ So much appreciate ^^

    ---------------------

    So I've been trying to figure out a simple, global, I-can-use-it-anywhere collision detection type thing that I never have to think about ever again. Cos each time I have to do collision detection in Unity I lose a few years off my life. AND YET I PERSIST.

    Anyway, when I used this piece of code in the master game controller it worked (detecting if there is an object next to the mouse click). Now I want to use the exact same function in another GameObject, detecting the same kind of GameObject but with a different tag, so I copy-pasted it into the other GameObject and called it... And according to the debug ray, it's always going from the screen centre to the same spot, and not detecting anything at all.

    The new GameObject housing this code moves upwards along the Y axis, one unit at a time, if it means anything.

    bool CheckPosForWall (Vector3 where) {
        Ray ray = Camera.main.ScreenPointToRay (where);
        Debug.DrawLine (Camera.main.transform.position, Camera.main.ScreenToWorldPoint (where));
        RaycastHit2D hit = Physics2D.GetRayIntersection (ray, Mathf.Infinity);
        if (hit.collider != null) {
            if (hit.collider.transform.tag == "block_stopped") {
                Debug.Log ('y');
                return true;
            } else
                Debug.Log ('n');
            return false;
        } else
            return false;
    }


    HOW OH HOW do I do simple collision detection - or "is there an object with tag X at this location" function in Unity?!?!?!?! ARRRGghhhh XD

    And don't say physics cos this game doesn't physics at all! :)
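    For what it's worth, one no-raycast option for an "is there a block here" check in 2D is Physics2D.OverlapPoint, which takes a WORLD-space point (a sketch; the class and method names are made up):

```csharp
using UnityEngine;

public class BlockQuery : MonoBehaviour {
    // True if a collider tagged "block_stopped" overlaps the given
    // world-space position. No ray, no screen-space conversion.
    public static bool IsBlockAt(Vector2 worldPos) {
        Collider2D hit = Physics2D.OverlapPoint(worldPos);
        return hit != null && hit.CompareTag("block_stopped");
    }
}
```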
  • @Tuism: What are you passing in as your "where" vector? If it's a vector that's not in world space (alternatives are localposition, etc), you'd get the behavior you're describing.
  • A few comments that may help:

    You are not debugging the same ray as your actual ray variable - they are built with different methods.

    Also, use layers and layer masks to restrict or control collisions - tags are messy and don't perform well. Is it possible that your layer setup differs between the old GameObject and the new one where you're using this code?
  • This is what I'm using to pass the "where", it is on the GameObject that is moving to move the blocks inside it.

    foreach (Transform genericBlock in transform) {
        if (CheckPosForWall (new Vector3 (transform.position.x, transform.position.y + 1f, transform.position.z))) {
            isBlocked = true;
            //block stopped
        } else {
        }
    }
    if (!isBlocked) {
        transform.position = new Vector3 (transform.position.x, transform.position.y + 1f, transform.position.z);
    }


    I figured it could be a local/worldspace thing, but I don't see why it would be cos I'm not using any local anything as far as I can tell... Unless foreach changes things?
  • dislekcia said:
    @Tuism: What are you passing in as your "where" vector? If it's a vector that's not in world space (alternatives are localposition, etc), you'd get the behavior you're describing.
    That's not entirely true. In this case you actually need a screen space 2D vector (preferably the mouse position), as the ray is cast perpendicular from screen to world and the result is a hit in world space. Even though it is a Vector3, the z is ignored.

  • So I've been looking at my rays, and I don't understand why they're not matching up...

    Ray ray = Camera.main.ScreenPointToRay (where);


    According to Unity documentation, "Returns a ray going from camera through a screen point.". Does that mean the screen point on the plane of the camera, or the screen point on the plane of z = 0? So "where" is a point on z = 0, so I thought this would be a shot through the exact placement of the object where "where" is.

    Debug.DrawLine(Camera.main.transform.position, Camera.main.ScreenToWorldPoint(where));


    According to Unity documentation, Debug.DrawLine uses world space for the start and end - so I used ScreenToWorldPoint on the where to determine the end point.

    ...So what's wrong here?? >_<
  • Yes, the camera view plane is your screen in world space (the back of the view frustum). Just use Debug.DrawRay(ray) - reuse your own ray variable. Just specify a color and a time like 5 or something to make it visible longer.
  • @tuism - apologies, Debug.DrawRay does not take the ray directly - use this instead:

    Debug.DrawRay(ray.origin, ray.direction, Color.yellow, 5.0f);
  • farsicon said:
    That's not entirely true. In this case you actually need a screen space 2d vector (preferable the mouse position) as the ray is cast perpendicular from screen to world and the result would be a hit in world space. Even though it is a Vector3 the z is ignored.
    Yeah, okay. That's why I was asking what was being passed into the function - to figure out which coordinate space it was in.
    Tuism said:
    So I've been looking at my rays, and I don't understand why they're not matching up...

    Ray ray = Camera.main.ScreenPointToRay (where);


    According to Unity documentation, "Returns a ray going from camera through a screen point.". Does that mean the screen point on the plane of the camera, or the screen point on the plane of z = 0? So "where" is a point on z = 0, so I thought this would be a shot through the exact placement of the object where "where" is.

    Debug.DrawLine(Camera.main.transform.position, Camera.main.ScreenToWorldPoint(where));


    According to Unity documentation, Debug.DrawLine uses Worldspace for the start and end - so I used a ScreenTorWorldPoint on the where to determine the end point.

    ...So what's wrong here?? >_<
    You're thinking in the wrong coordinate spaces, that's the issue.

    A screen point is a 2D point measured in pixels from the bottom left corner of the screen; its only relevant coordinates are x and y. This is often called screen space.

    A ray is a 3D construct that has a starting point (its origin) and a direction. Rays can be in world space or local space - the difference is what they are seen as being "relative" to. A world space ray is positioned relative to (0,0,0) in your world; a local space ray is positioned relative to the center of some other transform. You can convert a local space ray to a world space ray and back, but it's important to know where you're getting things from and why when you use them, so that you're always comparing things in the same set of spaces.

    ScreenPointToRay() is looking for a screen point to turn into a ray in world space. It reads the x and y values of a vector you pass it as pixel positions and then creates a ray in world space that passes through that point on both the near clipping plane and far clipping plane of the camera. This is usually useful for testing if something is "under" the mouse in your simulation, because mouse coordinates tend to be in screen space naturally. We use this in DD a lot.

    You're giving ScreenPointToRay() a position in world space. It looks like your world space is much, much smaller than your screen space (an entire in-game block is 1 unit, right? A single pixel is 1 unit in screen space) so the resulting ray is coming from somewhere like pixel (4, 5) in screen space. The rays look like they're always in the bottom left corner, right?

    Your Debug.DrawLine() is actually correct because you're reversing the math you're doing on the position, you're drawing a line where you're doing the test. It's just that where you're testing is completely wrong ;) ... It does actually move "up" as well, but only by a single pixel at a time, so it's probably hard to see.

    To fix it you need to stop trying to convert to screen space and just work in world space. Unfortunately I'm not 100% sure that RayCast2D is actually what you want to be doing, I suspect that's only casting rays on a single plane, so passing in a ray that's in 3D world space (albeit in the wrong location in that world space) is probably not the right thing to do. What happens if you put a block where the ray currently sits?

    Where did you get this from? It might work in projects with orthographic cameras where 1 pixel in screen space = 1 unit in world space, but the Raycast2D might still be borking it...
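    To make that advice concrete, here's a sketch of the check rebuilt to take a WORLD-space position, building the ray with the normal Ray constructor as suggested earlier (untested against the actual project):

```csharp
using UnityEngine;

public class BlockChecker : MonoBehaviour {
    // 'where' is now a world-space position (e.g. a rounded grid coordinate),
    // so no screen-space conversion happens anywhere.
    bool CheckPosForBlock(Vector3 where) {
        Vector3 origin = Camera.main.transform.position;
        Ray ray = new Ray(origin, where - origin); // from camera through the point
        RaycastHit2D hit = Physics2D.GetRayIntersection(ray, Mathf.Infinity);
        return hit.collider != null;
    }
}
```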
  • dislekcia said:
    A ray is a 3D construct that has a starting point (its origin) and a direction. Rays can be in world space or local space - the difference is what they are seen as being "relative" to. A world space ray is positioned relative to (0,0,0) in your world; a local space ray is positioned relative to the center of some other transform. You can convert a local space ray to a world space ray and back, but it's important to know where you're getting things from and why when you use them, so that you're always comparing things in the same set of spaces.
    Where did you get this from? A Ray is a world-space-only construct as far as I know - without the ability to parent it to something in world space, there's no way to get a relative transform for it. Have you been able to get it to work in local space? I would be very interested to see how you did that, it could be quite useful.
  • edited
    farsicon said:
    Where did you get this from? A Ray is a world-space-only construct as far as I know - without the ability to parent it to something in world space, there's no way to get a relative transform for it. Have you been able to get it to work in local space? I would be very interested to see how you did that, it could be quite useful.
    Dunno, going off the math involved, a ray is agnostic of actual space. Having never used them in Unity, I dunno if they're actual in-engine objects or not, I would assume not. They simply have a vector position and direction (either a quaternion or another vector), right?

    Rays in XNA were just a mathematical construct to make intersection tests easier, it was up to you to ensure that you were calculating your ray positions in the correct space so that collisions were meaningful. You get out what you put in. Does Unity give them an entire transform as well and convert rays to world space coordinates?
  • dislekcia said:
    farsicon said:
    Where did you get this from? A Ray is a world-space-only construct as far as I know - without the ability to parent it to something in world space, there's no way to get a relative transform for it. Have you been able to get it to work in local space? I would be very interested to see how you did that, it could be quite useful.
    Dunno, going off the math involved, a ray is agnostic of actual space. Having never used them in Unity, I dunno if they're actual in-engine objects or not, I would assume not. They simply have a vector position and direction (either a quaternion or another vector), right?

    Rays in XNA were just a mathematical construct to make intersection tests easier, it was up to you to ensure that you were calculating your ray positions in the correct space so that collisions were meaningful. Does Unity give them an entire transform as well?
    Works the same in Unity too - so the only way is to leverage the transforms of surrogate objects; rays don't have their own local space. You almost had me excited there.
  • farsicon said:
    Works the same in Unity too - so the only way is to leverage the transforms of surrogate objects; rays don't have their own local space. You almost had me excited there.
    Surely it only matters what you're using them for in that case? Like, if a ray is for collision testing, then you have to calculate that ray in the coordinates that matter for physics, right?

    It can't be too hard to add a script to a GameObject that encapsulates a local ray and projects it through its parent's transform when accessed though. Or to just calculate the ray from two reference points in local space as needed.
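    A minimal sketch of such a wrapper component (the class and field names here are made up for illustration):

    ```csharp
    using UnityEngine;

    // Hypothetical component: stores a ray in local space and projects it
    // through the attached transform every time it is read.
    public class LocalRay : MonoBehaviour
    {
        public Vector3 localOrigin = Vector3.zero;
        public Vector3 localDirection = Vector3.forward;

        // World-space ray, recalculated from the current transform on access.
        public Ray WorldRay
        {
            get
            {
                return new Ray(transform.TransformPoint(localOrigin),
                               transform.TransformDirection(localDirection));
            }
        }
    }
    ```

    Anything that needs the ray in world space (e.g. Physics.Raycast) would then read the WorldRay property, and moving or rotating the GameObject moves the ray with it.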
  • dislekcia said:
    It can't be too hard to add a script to a GameObject that encapsulates a local ray and projects it through its parent's transform when accessed though. Or to just calculate the ray from two reference points in local space as needed.
    Yup, this is the way it can be simulated: use transform.forward, etc. directly from GameObjects wherever they're nested, but only world space is used. If you use transform.localPosition, for example, your Ray will treat that value as world space and completely screw things up - physics in Unity works only in world space.
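    To make the distinction concrete, a sketch of the mistake versus the fix:

    ```csharp
    // Wrong: localPosition is relative to this object's parent, but the
    // physics system will interpret the value as world-space coordinates.
    Ray wrong = new Ray(transform.localPosition, transform.forward);

    // Right: transform.position is already in world space.
    Ray right = new Ray(transform.position, transform.forward);
    ```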

  • farsicon said:
    Yup, this is the way it can be simulated: use transform.forward, etc. directly from GameObjects wherever they're nested, but only world space is used. If you use transform.localPosition, for example, your Ray will treat that value as world space and completely screw things up - physics in Unity works only in world space.
    Maybe I'm not understanding what you're trying to do, but physics being in world space makes sense. That's how distinct objects are going to relate to each other most of the time anyway. What are you wanting physics in local space for, exactly? Why were you getting excited?

    I've used local space rays in specific situations to calculate particle movement and emission angles before, where it makes sense to care about a mesh's local reference frame in XNA because that's how I was getting polygon intersection data back at the time.

  • I'm trying to wrap my head around everything you guys have said. I kinda get it, and I'll tinker it more to see if I can get it working... Bear with me for a bit XD

    But thanks guys, really appreciate this trying to get me to understand it thing :)
  • @dislekcia: I have no use for local space physics, but if you had found a novel way to do raycasting in true local space (rays being space-aware), that would have been interesting. But you seem to attribute the space of sub-objects to your rays, which is not entirely accurate - though it doesn't really matter if it helps you. If you're interested, or if it will add value to this thread, I'll gladly explain why I say that :)
  • I'm still trying to understand this @_@

    OK so THIS I know functionally *works*. I use it in another script.

    bool CheckPosForBlock (Vector3 whereScreen)
    	{
    		Ray ray = Camera.main.ScreenPointToRay (whereScreen);
    		Debug.Log (ray.direction);
    		Debug.DrawRay (Camera.main.transform.position, ray.direction * 15, Color.red, 0.5f);
    		//Debug.DrawLine (Camera.main.transform.position, Camera.main.ScreenToWorldPoint (whereScreen));
    		RaycastHit2D hit = Physics2D.GetRayIntersection (ray, Mathf.Infinity);
    		return hit.collider != null && hit.collider.CompareTag ("block_dropping");
    	}


    But:

    1. The debug DrawRay goes straight into the screen; if I debug ray.direction it's (0.0, 0.0, 1.0), which I assume is exactly... straight ahead. Should this be the case, or is something wrong?

    2. It functionally works - I use this like so: I click, and it makes a block, then using this it checks the position of the mouse to see if there is a block there or if there's a block next to the position - if false and true then it draws another block - to make sure I'm making continuous shapes. So, I know this does work. The whereScreen that I pass it is the transform.position of a block in world space, or that with a modified x and/or y to check next to it.

    3. I'm trying to get this part understood so that I can extrapolate and use it in future, otherwise this works and I don't know how it works and I can't use it again @_@

    4. I wanted to call this exact same function from another gameObject, but it appears I can't. I've used the so-called singleton method on my master controller object

    public static MasterController instance;


    Doesn't that mean I can call this script from elsewhere by using MasterController.CheckPosForBlock? But I can't, autocomplete doesn't recognise it, so I assume something's not working, right? Why...? :/
  • Tuism said:
    1. The debug DrawRay goes straight into the screen; if I debug ray.direction it's (0.0, 0.0, 1.0), which I assume is exactly... straight ahead. Should this be the case, or is something wrong?
    You're not using the actual ray you're calculating, only its direction, which seems to be straight "down" into the screen. That's what we'd expect from how you're calculating the ray. Why are you using the camera position instead of the ray's position?
    Tuism said:
    2. It functionally works - I use this like so: I click, and it makes a block, then using this it checks the position of the mouse to see if there is a block there or if there's a block next to the position - if false and true then it draws another block - to make sure I'm making continuous shapes. So, I know this does work. The whereScreen that I pass it is the transform.position of a block in world space, or that with a modified x and/or y to check next to it.
    Let me guess: Orthographic camera, 2D project, set up so that 1 unit in world space = 1 pixel? You're basically special-casing away a lot of the projection math you'd need to do to be resolution agnostic... It works because the object's position is functionally identical to its pixel position in camera space. This is not always guaranteed! Make a standalone build, resize the window, and watch it all break...
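    One way to remove that special case is to project the block's world position through the camera explicitly instead of assuming units match pixels. A sketch, where blockWorldPosition is a stand-in for the block's transform.position:

    ```csharp
    // Resolution-agnostic: convert world space to screen space via the camera
    // rather than assuming 1 world unit == 1 pixel.
    Vector3 screenPos = Camera.main.WorldToScreenPoint(blockWorldPosition);
    bool occupied = CheckPosForBlock(screenPos);
    ```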
    Tuism said:
    3. I'm trying to get this part understood so that I can extrapolate and use it in future, otherwise this works and I don't know how it works and I can't use it again @_@
    Why do you think it's working? Why do you think it's not drawing the ray you want to see? It's all logical, extrapolate and tell us what you suspect is going on. Draw diagrams, whatever helps you math it out in your head.
    Tuism said:
    4. I wanted to call this exact same function from another gameObject, but it appears I can't. I've used the so-called singleton method on my master controller object

    public static MasterController instance;


    Doesn't that mean I can call this script from elsewhere by using MasterController.CheckPosForBlock? But I can't, autocomplete doesn't recognise it, so I assume something's not working, right? Why...? :/
    You've made a static variable of type MasterController. That means other classes can reference that variable as MasterController.instance. You also need to make that variable actually point to a useful object, so somewhere (usually in the Awake method of the MasterController class) you need to do something like:

    if (instance == null)
    {
        instance = this;
    }
    else
    {
    Destroy(gameObject); // or however you nuke objects in your setup
    }


    What you're probably looking for is to set the method itself as static (public static bool CheckPosForBlock (Vector3 whereScreen) ...) which would then allow other classes to reference it using MasterController.CheckPosForBlock() - a singleton doesn't magically make methods "reachable", the static keyword does. Probably a good idea to read up about static and what singletons actually do (and why people use them) before continuing. Like, you can't reference non-static variables inside a static method, why?
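    Putting the pieces together, a sketch of how the singleton and the instance method would be wired up (the method body is elided, and the call at the bottom assumes some other script has a screen-space position to check):

    ```csharp
    using UnityEngine;

    public class MasterController : MonoBehaviour
    {
        public static MasterController instance;

        void Awake()
        {
            // Keep the first instance alive, discard any duplicates.
            if (instance == null)
                instance = this;
            else
                Destroy(gameObject);
        }

        // Instance method: reachable from other scripts via the singleton.
        public bool CheckPosForBlock(Vector3 whereScreen)
        {
            // ... raycast logic ...
            return false;
        }
    }

    // From another script:
    //   bool blocked = MasterController.instance.CheckPosForBlock(somePos);
    ```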
  • farsicon said:
    @dislekcia: I have no use for local space physics, but if you had found a novel way to do raycasting in true local space (rays being space-aware), that would have been interesting. But you seem to attribute the space of sub-objects to your rays, which is not entirely accurate - though it doesn't really matter if it helps you. If you're interested, or if it will add value to this thread, I'll gladly explain why I say that :)
    No, please explain because I still don't understand.

    How does a mathematical construct retain any space information other than the reference frame in which you calculate it? Remember that Vector3 and Quaternion are similar such constructs; neither of them is explicitly "in world space" either. The burden is always on the mathematician to ensure that calculated values are used in the correct reference frame, switching or converting values between frames as necessary. So if you calculate a thing using transform.localPosition values, it's in local space. That's what I've been saying throughout.