4 Comments
Andreas Welsch

A routine that’s working for me is to use these tools in the context of podcasting and writing: generating an image for a new post and experimenting with styles and prompts. It takes longer to get the actual post published, but you can always stop at “good enough” and iterate on the next one. Similarly with using ChatGPT to summarize text with a standard prompt: it works most of the time, but when it gives you unexpected results, you can spend some time finding out why and improve the prompt again.
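For example, a “standard prompt” setup can be as simple as the sketch below. The model name, prompt wording, and helper function are placeholders to adapt, not a fixed recipe (this assumes the openai Python client, version 1.0 or later):

```python
# A minimal sketch of a reusable "standard prompt" for summarization.
# Assumptions: openai>=1.0 is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

STANDARD_PROMPT = (
    "Summarize the following text in 3 bullet points. "
    "Keep each bullet under 20 words.\n\n{text}"
)

def summarize(text: str) -> str:
    # One call with the fixed prompt; tweak the wording when
    # the results look off, then rerun on the same text.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder: any chat model works here
        messages=[{"role": "user", "content": STANDARD_PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content
```

Because the prompt lives in one place, improving it after an unexpected result means editing a single string and rerunning.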

Cornellius Yudha Wijaya

That's pretty good advice, Andreas. It's certainly what I'm trying to do as well: incorporate Generative AI into my pipeline so it's actually useful for something rather than just an experiment.

Anirban Ghatak

What’s your current Gen AI stack? Which stack have you skilled up on during this time?

Cornellius Yudha Wijaya

Currently, I don't want to overwhelm myself with too big a stack, so I've broken it down into the two topics I want most:

text-to-text (LLMs)

text-to-image

For both topics, I focus on learning with the Transformers library (https://huggingface.co/docs/transformers/index); a minimal sketch of where I start is below. I also try to stay active in reading new research (usually my source is paperswithcode); that's why I sometimes write about anything exciting in Gen AI. I've recently become really interested in agent research as well, but it's not my focus for now.
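The pipeline API is the easiest entry point into Transformers, since it wraps the tokenizer and model behind one call. This is just a sketch; the checkpoint is one common choice, not a fixed one:

```python
# Minimal text-to-text example with the Transformers pipeline API.
# Assumption: facebook/bart-large-cnn is just one common summarization
# checkpoint from the Hugging Face Hub; any other works the same way.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = (
    "Generative AI tooling is evolving quickly, with new models, "
    "libraries, and agent frameworks appearing every few weeks, "
    "which makes it hard to decide where to focus your learning."
)
# Returns a list of dicts; the summary lives under "summary_text".
print(summarizer(text, max_length=40)[0]["summary_text"])
```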

Within that focus, I've recently been working on LLM app development with LangChain (there is a good course: https://learn.activeloop.ai/courses/langchain).
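A minimal example of the kind of chain that style of course covers: a prompt template plus an LLM. This uses the classic LLMChain API from early LangChain releases; newer versions prefer a different composition style, so treat it as a sketch, not the one true way:

```python
# A minimal LangChain "prompt template + LLM" chain.
# Assumptions: early (0.0.x-era) LangChain API, OPENAI_API_KEY set.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a one-paragraph explanation of {topic} for beginners.",
)

# temperature=0 keeps the output close to deterministic.
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(topic="text-to-image diffusion"))
```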

There is also an earlier course I've been exploring for text-to-image with diffusion models (https://github.com/huggingface/diffusion-models-class).
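On the text-to-image side, the companion diffusers library gives a quick start. The checkpoint and the GPU assumption here are just one common setup, not the only option:

```python
# A rough text-to-image sketch with Hugging Face diffusers.
# Assumptions: a CUDA GPU is available, and runwayml/stable-diffusion-v1-5
# is just one widely used checkpoint; swap in any other.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The prompt is the whole interface: iterate on styles and wording here.
image = pipe("a watercolor illustration of a robot writing a newsletter").images[0]
image.save("post_header.png")
```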
