In the last lesson, we looked at how open-source machine learning models in Nuke can assist with basic human matting. Now, we're taking that same footage into Boris FX Mocha Pro 2025 to test its new Mask ML feature.
Hereโs what it does:
Instead of manually creating a mask, you add Mask ML to a layer and it generates a machine-assisted matte automatically. No need to track points, no need to animate splines frame by frame: just let the tool do its job.
But let's be clear: this is not a magic button.
Since this is a matting tool, the alpha you get is mostly a solid shape, devoid of motion blur trails. It's not replacing fine detail work yet, but it's changing how we approach the process.
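To see why a "solid shape" alpha falls short, here is a minimal NumPy/SciPy sketch (purely illustrative, not Mocha's actual processing): a binary matte holds only 0s and 1s, while motion blur and hair detail need partial alpha values at the edges. The `feather_matte` helper below is a hypothetical name for this demo.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feather_matte(hard_alpha: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Soften a binary matte so edge pixels take partial alpha values.

    Semi-transparent edges are what motion blur trails and fine hair
    detail require; a hard 0/1 matte cannot represent them.
    """
    soft = gaussian_filter(hard_alpha.astype(np.float64), sigma=sigma)
    return np.clip(soft, 0.0, 1.0)

# A toy "solid shape" matte, like the one an ML masker might output:
hard = np.zeros((64, 64))
hard[16:48, 16:48] = 1.0  # binary: every pixel is fully in or fully out

soft = feather_matte(hard, sigma=3.0)

# The hard matte contains only 0s and 1s; the feathered one also has
# partial values along the edges, which composite far more gracefully.
print(np.unique(hard))                  # [0. 1.]
print(((soft > 0) & (soft < 1)).any())  # True
```

In practice this cleanup step (edge refinement, blur recovery) is still where the artist's manual work comes in.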
And thatโs the point.
These are Gen 1 tools. Right now, they're assisting. But I can already see a future where we don't just use generic training data; we train our own models, fine-tuned to our workflow. Where we don't just pull from local sources; we pull from the cloud, accessing a library of pre-trained AI masks that improves over time.
This is the beginning of VFX shifting away from brute-force manual work toward machine-assisted artistry.
And thatโs the real impact.
Already, these tools are allowing me to spend more time making creative decisions instead of getting stuck on seamless extraction.
It feels like traveling to another state or country.
You can drive yourself there.
Or you can take a plane and a cab and get straight to where you need to be.
That's what these tools are doing: they're changing how we get there.
This is where MFX (Machine Assisted VFX) continues. Let's get into it.
Watch it on YouTube (Higher Res) - https://lnkd.in/gaqAvt6g