I am using an engine whose texture mapper, instead of taking per-vertex UV coordinates, takes a 3D point (P) and 2 vectors (M, N); the texture coordinates are computed directly from these basis vectors.
Note: from this point onwards I'll refer to PMN as 3 vectors instead of a point and 2 basis vectors.
P is the origin of the texture, M is the horizontal end of the texture, and N is the vertical end.
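As far as I understand it, this means the textured plane is parameterized as Q(u, v) = P + u * (M - P) + v * (N - P), so that Q(0, 0) = P, Q(1, 0) = M, Q(0, 1) = N, and the mapper has to recover (u, v) for the point each pixel's ray hits.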
Then M and N are reduced to edge vectors relative to P, and 3 new 'magic' vectors are computed:
M.sub(P) // in place: M is now the horizontal edge vector (M - P)
N.sub(P) // in place: N is now the vertical edge vector (N - P)
A = P.cross(N)
B = M.cross(P)
C = N.cross(M)
Then for each pixel (x, y):
S = Vector3f(x, y, 1) // direction of the ray from the eye through the pixel, screen plane at z = 1
float a = dot(S, A)
float b = dot(S, B)
float c = dot(S, C)
float u = texture.width * a / c
float v = texture.height * b / c
color = texture.pixels[(int) u + (int) v * texture.width]
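For concreteness, here is a minimal self-contained sketch of the whole per-pixel path as I understand it. Vec3 is a hypothetical stand-in for my engine's vector type, and I'm assuming x and y are already expressed relative to the center of projection, with the screen plane at z = 1:

public final class PmnSampler {
    static final class Vec3 {
        final double x, y, z;
        Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
        Vec3 sub(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
        double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
        Vec3 cross(Vec3 o) {
            return new Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x);
        }
    }

    // p, m, n as described above; returns the texel hit by the ray through (x, y).
    static int sample(Vec3 p, Vec3 m, Vec3 n, double x, double y,
                      int[] pixels, int width, int height) {
        Vec3 pm = m.sub(p);         // horizontal edge vector
        Vec3 pn = n.sub(p);         // vertical edge vector
        Vec3 a = p.cross(pn);       // the three 'magic' vectors, constant per face
        Vec3 b = pm.cross(p);
        Vec3 c = pn.cross(pm);

        Vec3 s = new Vec3(x, y, 1); // ray direction through the pixel
        double u = s.dot(a) / s.dot(c);
        double v = s.dot(b) / s.dot(c);

        int ui = Math.min(width - 1,  Math.max(0, (int) (u * width)));
        int vi = Math.min(height - 1, Math.max(0, (int) (v * height)));
        return pixels[ui + vi * width];
    }

    public static void main(String[] args) {
        // Unit quad facing the camera at z = 4; the ray through the screen
        // center should land at u = v = 0.5.
        Vec3 p = new Vec3(-1, -1, 4), m = new Vec3(1, -1, 4), n = new Vec3(-1, 1, 4);
        System.out.println(sample(p, m, n, 0.0, 0.0, new int[64 * 64], 64, 64));
    }
}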
The algorithm and its derivation are explained in more depth here: https://nothings.org/gamedev/ray_plane.html
Now I have found an algorithm that converts PMN vectors to per-vertex UV coordinates:
Point3D a = ... // first vertex of the triangle
Point3D b = ... // second vertex of the triangle
Point3D c = ... // third vertex of the triangle
Point3D p = ... // origin of the texture
Point3D m = ... // horizontal end of the texture
Point3D n = ... // vertical end of the texture
Point3D pM = m.subtract(p);
Point3D pN = n.subtract(p);
Point3D pA = a.subtract(p);
Point3D pB = b.subtract(p);
Point3D pC = c.subtract(p);
Point3D pMxPn = pM.crossProduct(pN); // normal of the texture plane
Point3D uCoordinate = pN.crossProduct(pMxPn);
double mU = 1.0 / uCoordinate.dotProduct(pM);
double uA = uCoordinate.dotProduct(pA) * mU; // u coordinate of the first vertex
double uB = uCoordinate.dotProduct(pB) * mU; // u coordinate of the second vertex
double uC = uCoordinate.dotProduct(pC) * mU; // u coordinate of the third vertex
Point3D vCoordinate = pM.crossProduct(pMxPn);
double mV = 1.0 / vCoordinate.dotProduct(pN);
double vA = vCoordinate.dotProduct(pA) * mV; // v coordinate of the first vertex
double vB = vCoordinate.dotProduct(pB) * mV; // v coordinate of the second vertex
double vC = vCoordinate.dotProduct(pC) * mV; // v coordinate of the third vertex
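For reference, here is how I sanity-check the conversion. I'm assuming Point3D is javafx.geometry.Point3D (which is where the subtract/crossProduct/dotProduct calls above come from in my code); the concrete coordinates are made-up test values. Feeding p, m, n themselves in as 'vertices' should produce UVs of (0, 0), (1, 0) and (0, 1):

import javafx.geometry.Point3D;

public class PmnToUvCheck {
    public static void main(String[] args) {
        Point3D p = new Point3D(1, 2, 5);  // origin of the texture
        Point3D m = new Point3D(4, 2, 5);  // horizontal end
        Point3D n = new Point3D(1, 6, 5);  // vertical end

        Point3D pM = m.subtract(p);
        Point3D pN = n.subtract(p);
        Point3D pMxPn = pM.crossProduct(pN);

        Point3D uCoordinate = pN.crossProduct(pMxPn);
        double mU = 1.0 / uCoordinate.dotProduct(pM);
        Point3D vCoordinate = pM.crossProduct(pMxPn);
        double mV = 1.0 / vCoordinate.dotProduct(pN);

        // p, m, n treated as triangle vertices must map to the texture corners.
        for (Point3D vertex : new Point3D[] { p, m, n }) {
            Point3D d = vertex.subtract(p);
            double u = uCoordinate.dotProduct(d) * mU;
            double v = vCoordinate.dotProduct(d) * mV;
            System.out.printf("u = %.3f, v = %.3f%n", u, v);
        }
    }
}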
I am looking for an explanation of how this PMN -> UV conversion method works.
I am also interested in how the algorithm using PMN works, as I still don't intuitively understand how these texture coordinates are computed from the basis vectors. An explanation of that would help me a lot too, but it is not necessary, as I mostly want to understand the PMN -> UV conversion method.
A very specific question I have about the PMN texture mapping algorithm is: what do the a, b, c values define for each pixel? Every article I've found that describes this algorithm simply refers to them as "magic coordinates", just as A, B, C are simply referred to as "magic vectors", which isn't very helpful for anyone who actually wants to understand each part of the algorithm instead of just implementing it.