Where to get scene object models?

Does anybody know where to get scene object models, like the shampoo and flowers in the bathroom scene below, for use in KeyShot?
[example render image]
I've found a lot of models as 3ds Max files but nearly none for KeyShot. Should I learn 3ds Max or Blender to improve my scene rendering skills?

https://grabcad.com/


These are probably the big ones. There are plenty of others, but these are the most popular.


Some websites which are helpful for creating great renderings:


https://ambientcg.com/
https://texturebox.com/free
https://www.cgbookcase.com/
https://www.sharetextures.com/
https://publicdomaintextures.com/
https://patternpanda.org/
https://www.textures.com/free
https://www.poliigon.com/
https://www.cgtrader.com/


Besides the free sources, there are some companies that sell these kinds of models, organized by room type etc., for example:

CGAxis: CGAxis - 3D models, PBR, HDRI for your 3D visualizations projects

Evermotion: Evermotion - High quality 3D Models for architecture, 3D projects and games (page 2 already shows the kind of things you're looking for)

Model+Model: Search results (modelplusmodel.com)

3DSky: 3d models - download 3dsky.org (I think your example picture is from them as well)

The prices are often pretty OK, and while the models are mostly made in 3ds Max, they usually also come with an obj/fbx export. Learning a 3D modelling package like 3ds Max is only needed if you want to create your own models. If you have models that are only available in 3ds Max format, you could also download a trial and use the KeyShot plugin to bring them into KeyShot. I must say that doesn't really work well, though, and the textures need a lot of manual work.


Thanks for the information, I've found the models I want at CGTrader.


Thanks for the websites, I've found the models I want.

Thanks. Another issue is that KeyShot is not like long-standing programs such as 3ds Max and Maya; it needs more groundwork than just clicking and grabbing. Anyway, I'll figure it out.

3ds Max/Blender/Modo/Maya are all 3D programs with the ability to render, more like all-in-one packages. Every one of those programs has its strong and weak points, but rendering is for most of them not their strong point, since they are basically modellers.

That’s why most people who use one of those programs actually render with
V-Ray/D5 Render/Octane/RedShift/Corona/Houdini/KeyShot.

So basically you do need a dedicated renderer alongside every modeller; Blender's renderer is maybe an exception since it's good and fast. KeyShot is by far the friendliest renderer to use, I would say, also because of the way you create materials, which I think is easier than getting it right in something like V-Ray.

I don't really think there is an easy click-and-grab workflow if you want high-quality results. What would make a difference is if KeyShot improved how materials from different sources get converted to KeyShot materials.

Most of the time the models you buy have quite simple materials, but there are also loads that come with V-Ray materials. If KeyShot made it easier to convert those, for example via the plugin, it would save a lot of time.

Last week I was working on an interior scene, a ready-made model with V-Ray materials. With one click of a button it was converted to KeyShot. But then there were over 100 materials that only had a diffuse map, and I needed to manually attach the other 3-5 textures to every single material.
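That manual map-matching step is exactly the kind of thing a small script can help with, provided the texture files follow a naming convention. As a minimal sketch (the suffix list, file names, and `group_texture_maps` helper are my own assumptions, not anything KeyShot or the plugin provides), this groups texture files by material name so you at least know which 3-5 maps belong to which material before assigning them:

```python
import re
from collections import defaultdict

# Common PBR map suffixes; naming conventions vary per asset pack,
# so you would adapt this list to your own files.
MAP_SUFFIXES = ("diffuse", "basecolor", "roughness", "normal",
                "metallic", "height", "ao", "opacity")

def group_texture_maps(filenames):
    """Group texture files by material, assuming a
    '<material>_<maptype>.<ext>' naming pattern."""
    pattern = re.compile(
        r"^(?P<material>.+)_(?P<map>%s)\.(png|jpg|tif)$" % "|".join(MAP_SUFFIXES),
        re.IGNORECASE)
    grouped = defaultdict(dict)
    for name in filenames:
        m = pattern.match(name)
        if m:  # non-matching files (readmes etc.) are simply skipped
            grouped[m.group("material")][m.group("map").lower()] = name
    return dict(grouped)

files = ["Wood_Diffuse.png", "Wood_Roughness.png", "Wood_Normal.png",
         "Marble_BaseColor.jpg", "Marble_Normal.jpg", "readme.txt"]
print(group_texture_maps(files))
```

In a real project you would feed it `os.listdir()` of the texture folder; the point is just that 100 materials times 4 maps is a lookup problem a computer solves faster than a human.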

And while you always need to tweak materials so they fit the renderer, it could be friendlier; the same goes for Substance materials. But in the end, no matter what your 3D pipeline is, you need to figure out solutions that work well for the things you want to render. The time invested in finding the right pipeline can be significant, but afterwards you don't run into the same issues every time.

3ds Max, for example, has quite a lot of scripts/tools/plugins you can use to convert materials to a format that might be easier for KeyShot. NVIDIA is currently working on Omniverse, which is basically a kind of cloud interface between all kinds of 3D applications, renderers, and game engines. It acts as an intermediary between different software, so it doesn't really matter anymore where something came from, which makes a really complicated pipeline less necessary.

And there are also some file formats that try to make life easier, like USD, which you could see as the new FBX but with more possibilities. But it's all at a very early stage, and models you buy will often be in either 3ds Max or FBX format. For the first you need 3ds Max, and the second has its own limitations.


I'm wondering if KeyShot and AI tools like ChatGPT or Midjourney are thinking about a collab where you upload a generic form, provide a few prompts, and it spits out the results. I personally would love something like that… it's eventually going to be a thing. :rofl: No more material graphs. CUE THE COMMENTS…

I run the Stable Diffusion AI locally (something everyone can do/install). There's an option called IMG-2-3D, which creates a 3D object from a single image. I think it's currently mainly a proof of concept, but it's fascinating what you can do with a normal desktop these days.

This is a slightly different one that I've installed, but you get the idea:

ashawkey/stable-dreamfusion: Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion. (github.com)


Yes, I know what you mean about Stable Diffusion. I wonder what the quality of the models is?

It's not really usable yet, but @john.cain was wondering if that would be a thing sometime, so the basics are there. It's amazing that it can create a 3D model from a 2D image, but it does so by 'guessing'. Photogrammetry, by comparison, uses some number of images of an object and constructs a 3D model from those, so its results are cleaner.

If I model something myself I always try to make a nice clean model, but I'm not good at modelling. That doesn't matter too much, though, since KeyShot is not really picky about how clean a model is. If you create models for games, it's a different story.

These days more and more things are getting easier, like automatic UV mapping, automatic retopology, cleaning up models, etc. In a way, software keeps getting smarter, and I'm already curious where it will be in 5 years. I think it will always be valuable to have certain skills, though.

If you look at web design, for example, there are also many tools that help you easily build a site, but it's a shame the overhead of useless code/scripts is so high. Faster internet and faster PCs compensate for it, but in terms of data-centre usage and data traffic it's not really efficient from an environmental point of view. Most automatic solutions are not that great yet, but with AI they could lead to smarter and more efficient approaches in the long run.

These are just guesses on my side, but it's interesting to follow the trends.


@john.cain I just saw a link to this article, creating 3D out of video footage using neural networks. Thought you might like it: https://research.nvidia.com/labs/dir/neuralangelo/

Thanks, I just finished watching it; it's wonderful. But I wonder how they solve the issue of texture maps after reconstructing a building, or anything else grabbed from videos or pictures. This is a problem I ran into recently: I want to transform real-life objects into digital assets, which I think involves two parts, modelling and rendering. If you have any ideas about this, please let me know, thanks.
P.S.: Johns Hopkins is a famous university; I'm looking forward to its next outstanding step.

I think that will always be an issue with automatically created 3D models. It's like using photogrammetry or lidar data directly as a 3D model: the resulting geometry is either a bit messy or way too detailed, so you have to do some retopology to get clean models to render.

Automatic retopology is getting better over time, and I'm sure it can make a big step if AI is involved more to recognize the kind of object, so it becomes smarter about which details matter and which can be removed or simplified.

I'm far from an expert on modelling and these kinds of software, but it's really fun to try what you can do with, for example, a mobile phone. I once used a photogrammetry app that needs images, but instead I just shot video and used every 5th frame as image input for the photogrammetry. The result wasn't bad, actually. I think these are interesting times, with a lot of smarter software in the pipeline.
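The every-5th-frame trick above is easy to automate. As a minimal sketch (the `every_nth_frame` helper and the clip numbers are my own illustration, not from any specific photogrammetry app), this picks which frame indices to keep; the actual frame extraction would be done by a tool like ffmpeg or OpenCV:

```python
def every_nth_frame(total_frames, n=5):
    """Return the frame indices to keep when sampling every n-th frame,
    always starting at frame 0 so the first view is covered."""
    if n < 1:
        raise ValueError("n must be >= 1")
    return list(range(0, total_frames, n))

# e.g. a 4-second clip at 30 fps has 120 frames -> 24 input images
indices = every_nth_frame(120, 5)
print(len(indices), indices[:4])  # 24 [0, 5, 10, 15]
```

For the extraction itself, ffmpeg can do the same thing in one line with something like `ffmpeg -i clip.mp4 -vf "select='not(mod(n,5))'" -vsync vfr frame_%04d.png`; how many frames you keep is a trade-off between photogrammetry quality and processing time.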