AI in Marketing Research: Expert Panel Discussion

00:59:44
https://www.youtube.com/watch?v=_-jgQMA0zWo

Summary

TLDR: The webinar explored the use of artificial intelligence in marketing research, with experts sharing their experiences and practical applications. Topics discussed included data quality, qualitative analysis, and the use of language models to improve surveys. The speakers also discussed the upcoming Analytics and Insights Summit, where further findings on the subject will be presented. Ethical concerns and challenges around the use of AI were highlighted, along with the importance of prompt design for obtaining meaningful results.

Key takeaways

  • 🤖 AI is transforming marketing research.
  • 📊 Qualitative analysis is more efficient with AI.
  • 🛠️ Tools such as Voxpopme are used for data analysis.
  • 📅 The Analytics and Insights Summit will take place from April 29 to May 3.
  • ⚖️ Ethical questions are fundamental to the use of AI.
  • 📝 Prompt design is crucial for accurate results.
  • 📈 AI can improve data quality in surveys.
  • 💡 AI can help identify themes in qualitative data.
  • 🔍 AI could potentially replace traditional surveys.
  • 💬 AI hallucinations can be mitigated with well-designed prompts.

Timeline

  • 00:00:00 - 00:05:00

    Welcome to the Sawtooth Software webinar on AI and marketing research, with four guests sharing their experiences using artificial intelligence in research. The webinar recorded a high number of registrations, highlighting interest in the topic. There will be a Q&A session at the end, and a recording will be sent to all registrants.

  • 00:05:00 - 00:10:00

    Bryan Orme, president of Sawtooth Software, introduces the speakers and notes that there will be 11 presentations at the upcoming Analytics and Insights Summit. The speakers will share their findings on AI, and delayed virtual access to the content will be available for those who cannot attend in person.

  • 00:10:00 - 00:15:00

    Kevin Karty, an analytics expert, discusses data quality challenges in market research surveys, highlighting problems such as falling participation rates and rising invalid responses. He proposes making surveys more engaging and human through the use of AI, for example by implementing dynamic questions and automated analysis.

  • 00:15:00 - 00:20:00

    Kevin shares his experience using AI to humanize surveys, mentioning ChatGPT and large language models. He stresses the importance of improving the user experience in surveys to obtain higher-quality data.

  • 00:20:00 - 00:25:00

    Mangela Budia, associate director at Ipsos, presents a study on using large language models to replicate the results of conjoint studies. She discusses the challenges and opportunities of using AI to analyze consumer preferences and the importance of testing different parameters in AI models.

  • 00:25:00 - 00:30:00

    Mangela explores the impact of temperature settings in language models and how they influence responses. She emphasizes the need to understand how AI models handle complex designs, as well as the issue of positional bias in responses.

  • 00:30:00 - 00:35:00

    Dan Penny of Microsoft discusses the adoption of AI across various research use cases, including the analysis of open-ended data and the use of synthetic data. He stresses the importance of testing different scenarios to determine the effectiveness of AI in market research.

  • 00:35:00 - 00:40:00

    Dan shares that Microsoft is experimenting with synthetic data and with using AI to improve research efficiency. He notes that there are still many opportunities to explore and that AI is already present in many areas of research.

  • 00:40:00 - 00:45:00

    Jeff Dotson, a professor of marketing, presents a project on using generative AI to create experimental stimuli. He discusses the ethical challenges related to AI and intellectual property, highlighting the importance of considering the value of artistic style.

  • 00:45:00 - 00:50:00

    The panel discusses the importance of testing AI in a variety of contexts and the need for an ethical approach to AI in research. It is emphasized that AI can improve efficiency, but there are also risks and limitations to consider.

  • 00:50:00 - 00:59:44

    The webinar concludes with a Q&A session in which the speakers answer questions about how AI is influencing market research and which tools they are using to analyze qualitative data.

Video Q&A

  • What is the main topic of the webinar?

    The webinar focuses on the use of artificial intelligence in marketing research.

  • Who are the speakers on the webinar?

    The speakers are industry experts who share their experiences with AI in research.

  • When will the Analytics and Insights Summit take place?

    The summit will take place from April 29 to May 3.

  • What are some applications of AI in market research?

    AI is used to improve data quality, analyze open-ended responses, and design surveys.

  • Are there ethical concerns about the use of AI?

    Yes, there is much discussion of the ethical questions tied to the use of AI in research.

  • How is AI used to analyze qualitative data?

    AI is used to summarize qualitative data and identify themes within it.

  • Which tools are used for qualitative analysis?

    Tools such as Voxpopme are used to analyze qualitative data.

  • Can AI replace traditional surveys?

    Potentially, but there are open questions about how and when it should be used.

  • Why is prompt design important when working with AI?

    Prompt design is crucial for obtaining accurate and meaningful results from AI.

  • How are AI hallucinations addressed?

    They can be mitigated through careful prompt design.

Subtitles (en)

  • 00:00:04
    Okay, let's get started. We want to welcome everyone to this Sawtooth Software webinar, entitled "AI and Marketing Research." We have four amazing guests on our webinar today, and each of them will share with us their experience with artificial intelligence and how they are applying it to research. Each of them will also be speaking a little later this month at the Analytics and Insights Summit that we're having, so we're excited to hear them now and at the conference a little bit later. AI is a very interesting topic; actually, this webinar has had the second highest number of signups we've ever had for a webinar, so there's lots of interest. We're excited that you'll be able to hear what our guests have to say and interact with them. I want to mention a few things before we get started.
  • 00:01:00
    Just a few details about the webinar: we have a Q&A section here at the bottom of Zoom that you can use to ask questions throughout, and then at the end of the webinar we'll have a Q&A session where we'll try to answer as many questions as we can in the time allotted. Also, a recording of this webinar will be sent out to everybody who registered.
  • 00:01:26
    A little plug here for Sawtooth Software: we have an amazing survey platform that we're building, we're really good at conjoint analysis and MaxDiff, it's easy to use, we have incredible support, we're really friendly, and you can try us for free at discover.sawtoothsoftware.com. Also, as I mentioned, the Analytics and Insights Summit will be happening April 29th through May 3rd, so it's coming right up, down in San Antonio, Texas. It's a great venue, and we have amazing speakers and an amazing schedule planned. Come down and interact with some of the brightest researchers in the industry. If you can't make it down there, you can also get the delayed virtual access that we'll provide. You can see more information at sawtoothsoftware.com/conference.
  • 00:02:24
    With that, I'm going to turn the time over to Bryan Orme, president of Sawtooth Software.
  • 00:02:32
    Super, thank you, Justin. So we put this together; we have 11 presentations coming up at the Analytics and Insights Summit on AI, and our four guests are four of those 11 speakers, who are going to give us a taste of some of their findings ahead of the AI summit. If you want to see the rest of the story, please sign up; it's not too late to sign up and come. Also, if you can't come in person, you can sign up for the delayed option, which gives you the videos and the slides about one week after the conference is over, for you to enjoy on your own schedule and at your own leisure.
  • 00:03:11
    So we've lined up these four speakers, and I'm really happy that they've prepared. I'm going to introduce each of them one at a time, and I'm going to give them just one follow-up question after their five-minute presentations. After that, we're going to allow the panelists to talk to one another, ask questions of one another, and have about a 10- or 15-minute panel discussion amongst themselves. Then, for the last 15 minutes or so of this webinar, we're going to open it up to the questions that you've put in the Q&A. I don't think we'll be able to get to all of them, but we'll pick some of those, ask them of the panelists, and see where we get from there.
  • 00:03:49
    So I'm really looking forward to this, and we're going to start out with Kevin Karty. Kevin has been in analytics for over 20 years, with 15 years in marketing research and eight years in new product development. He co-founded Intuify to humanize surveys using technology and AI while simultaneously enhancing data quality and depth of insights. Prior to launching Intuify, he led a technology incubator at a major real estate finance company and ran analytics and innovation for Affinnova before its acquisition by Nielsen. He earned his PhD in quantitative methods and political science from MIT and has published papers and patents on various research methods. With that, go ahead and take the floor.
  • 00:04:36
    Hi Bryan, thank you, it's a pleasure to be here, and good to meet you all. Like everyone here, we're covering a lot of material at the upcoming conference, so we'll just give you a quick taste of some of the things we'll be talking about. I'm going to share my screen real quick; hopefully you can all see it, and if not, let me know before I get into it. These are some of the things we're going to be discussing: the impact of AI on survey cheating, dynamic questions, coding open ends, and some automated insights. There are some other things we won't be talking about, because other folks on this call will be covering them at the conference in great detail with some fantastic papers, and I'm certainly looking forward to that.
  • 00:05:22
    To give you a little taste of the types of things we'll be covering, take a quick mental journey back to the year 2022. It feels like an eternity ago: the Ukraine war had just started, Queen Elizabeth had died, and inflation was running amok. In the market research industry, the big issue back then was data quality, and it still is today; in fact, it's probably gotten worse. Think of dropping participation rates, rising cheater rates, professional survey takers with individuals taking 50-plus surveys a day, and challenges with empty answers and the quality of content. One of the major reasons we were facing these kinds of problems was survey design; if you've used surveys, you're very familiar with those types of issues. People have seen tremendous progress in their experience with computers in general, and in what they expect from the experiences they engage in, but the experience of taking surveys has remained a little behind the times. In fact, this is an actual survey from 1997, if anyone was around back in the Greenfield Online days, and then 2008 and 2022 over here; the progress is pretty limited compared to other areas of technology.
  • 00:06:23
    So we set off on this journey asking the question: how do we make surveys dramatically more engaging, so that we get much better data and can address some of these issues? Part of that was really making surveys more human in a lot of different ways, including graphics; but looking at AI, we asked whether AI can make surveys and other types of research more human, not less human, which is a little different from some of the simulated-response approaches out there that will be discussed as well.
  • 00:06:54
    So when we first heard about ChatGPT: this is my initial response when I first heard about it, and this is me after using ChatGPT for 30 minutes, when I immediately started seeing a lot of the applications we could begin experimenting with. My skepticism was born out of this issue: if you've been in modeling for a period of time, you've noticed that models have gotten bigger and bigger, with lots of heavy-duty promises, for a long time. By the way, the axis on the bottom here is the parameter count of the largest models being published in the literature; it's a log scale, so every point of increase is about 10x. You see the steady increase in model size over time, then an inflection around 2019 with generative AI, and then large language models come in and you see a massive jump in model size. I was skeptical because big models do not necessarily translate into better outcomes. GPT-4, by the way, is up here; that's how big it is, literally trillions of parameters, and this is GPT-3 compared to GPT-4. So there are these huge differences in models, but did they make an impact?
  • 00:08:14
    One of the applications we looked at in terms of humanizing surveys was conversational voice: using AIs to almost converse with people as part of a survey and its questions. I won't go into details now, as we certainly don't have much time, and I hope you get to join the full session, but I will give you a problem that emerged out of these solutions, because solutions create their own problems. We did a presentation last year at a Quirk's event with Pepsi where we ran an interactive voice survey, and we had 20,000 voice responses with 240 hours of content. So the next question that AI creates is: okay, we can humanize surveys, but what do we do with all of that? That's another challenge that LLMs and other existing AI models are helping us deal with, and we'll talk about different solutions that we and other companies are using to approach those problems. I certainly hope everyone gets a chance to join us; there's a lot to talk about, there are so many excellent presentations, and the folks on this call are people I'm privileged to be with and whom you'll definitely want to connect with.
  • 00:09:26
    Thanks so much for that intro. I've got a question for you: we're hearing a lot of interest in AI for the marketing research sector, perhaps to the level of hype. What tasks is AI most delivering on, and where do you think it's falling flat?
  • 00:09:44
    Yeah, there's a lot there. I just got back from the Insights Association annual conference, and this has been a topic of conversation at basically every conference this year, and last year as well. There's a lot of focus right now on coding data, which I think is certainly one of the biggest opportunities: how do we deal with large volumes of what used to be dark data, and now that we can deal with it, how does that change the types of data we want to connect with? There are a lot of use cases around creating better content and improving efficiency in the research process; that's everything from writing RFPs to summarizing content. There has been a lot of investment and improvement in areas around qualitative analysis in particular: if you have 20 IDIs (in-depth interviews), you can use existing technology out of the box to summarize and reduce that content, really speeding up and improving the operational efficiency of qualitative research. Those are some areas that I think are widely used right now.
  • 00:10:54
    I think we're still scratching the surface of what we can do with it. I know we're experimenting with ways of radically changing surveys, things like brand tracking with open-ended analysis, or emotional elicitation. What the technology enables are tools and processes that give us the ability to collect data in different ways and to analyze that data in different ways; I think that's the long pole. But in the short term, the operational efficiency gains, particularly in coding open-ended data, are huge, and constructing surveys is huge too, just writing surveys. People were discussing use cases where they take an existing survey and say, "Hey, I have this new topic; take my prior survey and adapt it for this new domain, this new vertical," saving them a couple hours' worth of work. I think those are a lot of the use cases people are discussing these days.
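Kevin's point about coding open-ended data is concrete enough to sketch. The following is a minimal Python illustration of the idea, not any panelist's actual pipeline: it assumes the openai SDK and a hypothetical five-theme code frame, and assigns each verbatim to one theme.

    # Minimal sketch of LLM-assisted coding of open-ended survey responses.
    # Illustrative only: assumes the openai Python SDK and an OPENAI_API_KEY
    # in the environment; the code frame below is hypothetical.
    from openai import OpenAI

    client = OpenAI()

    THEMES = ["price", "ease of use", "support", "reliability", "other"]

    def code_verbatim(verbatim: str) -> str:
        """Assign one theme from the code frame to a single open-ended response."""
        prompt = (
            "You are coding open-ended survey responses.\n"
            f"Code frame: {', '.join(THEMES)}.\n"
            "Reply with exactly one theme from the code frame.\n\n"
            f"Response: {verbatim!r}"
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # keep the coding as deterministic as possible
        )
        theme = reply.choices[0].message.content.strip().lower()
        return theme if theme in THEMES else "other"  # guard against off-frame replies

    print(code_verbatim("The dashboard was confusing and setup took forever."))
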
  • 00:12:00
    Very interesting, thank you so much for that. We're going to go ahead and move on to Mangela Budia. She is an associate director in the advanced analytics team at Ipsos. Her area of expertise lies in advanced analytics across a wide range of multivariate techniques, and she specializes in choice models such as conjoint and MaxDiff. Mangela has over 20 years of research experience working with leading clients within a range of industries and is responsible for designing the advanced methodologies that ensure all objectives are met and actionable insights can be taken from the findings. Go ahead, Mangela, tell us a little bit about what you're going to be talking about at the upcoming Sawtooth Software Analytics and Insights Summit.
  • 00:12:45
    Thank you very much, Bryan. I'm just going to share my screen to give you an idea of the things we're going to be sharing at the summit next month; hopefully you can all see it at the moment. The rise of large language models has led to a growing interest in their uses for data analytics within market research. For our paper, titled "The Machines Are Here, But Will They Take Over?", we conducted a large research exercise involving a quarter of a million generated AI responses across a diverse set of scenarios, looking at their ability to replicate the results of previous conjoint and MaxDiff studies.
  • 00:13:40
    One of the main motivations behind this paper was a previous paper that explored the use of GPT-3.5 to mimic consumer behavior. The authors tested the consistency of the models' responses against four economic theories and concluded that large language models could broadly serve as a tool for understanding consumer preferences, finding that the behaviors were consistent with economic theory. However, there were other conclusions as well; two of the main ones were that there was a positional bias, in that the first concept was selected more often than the other concepts presented, and that the models were very sensitive to the prompts they were given.
  • 00:14:30
    Although that paper laid a lot of the groundwork for us, we still had a lot of questions that needed answers, so we put together these hypotheses to test. Can large language models handle more complex designs? Do different models impact choice responses, now that we've seen such a rise in the different types of large language models? Within these models there's a temperature setting, which controls the randomness and variability of a model's responses, so we wanted to better understand how it impacts performance. We also wanted to look at the best way to prompt the models and to understand how big an issue the positional bias is. We wanted to see whether, running the analysis at the individual respondent level, we could achieve the same level of differentiation as we would get from real studies, and what would happen if we were to train large language models with external results. Ultimately, as a commercial organization, the critical hypothesis was: do the results derived from large language models provide the same commercial insights as a study with real respondents?
  • 00:15:51
    With these hypotheses in mind, we started off with an exploratory first phase focused on testing the experimental factors. We took three commercial data sets, drew a random sample of 500 respondents from each, and looked at three different large language models: GPT-4, Claude 2, and Gemini Pro. We also looked at temperature settings and the implications of varying them: a lower value leads to more deterministic responses, while higher values lead to more diverse responses. The prompts we fed were fixed prompts in which we had developed personas around some of the respondents' demographics and behaviors, incorporated into the prompts, and all the tasks were submitted as a single prompt in one go.
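To make the temperature and persona-prompt factors concrete, here is a minimal sketch of submitting one conjoint-style choice task to an LLM; this is an illustration under assumed details (the persona text, the concepts, and the model name), not the Ipsos study's actual setup.

    # Sketch of one conjoint-style choice task posed to an LLM, showing the
    # persona prefix and the temperature setting. The persona, concepts, and
    # model name are illustrative assumptions, not the study's actual inputs.
    from openai import OpenAI

    client = OpenAI()

    persona = ("You are a 34-year-old urban renter who commutes daily "
               "and shops on a budget.")
    concepts = [
        "A: 64GB phone, 6.1-inch screen, $499",
        "B: 128GB phone, 6.1-inch screen, $599",
        "C: 128GB phone, 6.7-inch screen, $699",
    ]

    task = (
        persona
        + "\nYou are answering a market research survey. Which ONE of these "
        + "would you choose? Reply with just the letter.\n"
        + "\n".join(concepts)
    )

    for temperature in (0.0, 1.0):  # low = more deterministic, high = more diverse
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": task}],
            temperature=temperature,
        )
        print(temperature, reply.choices[0].message.content.strip())
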
  • 00:16:40
    This first stage allowed us to narrow down the focus for what we called phase two of the research. We took the best-performing large language model and parameter settings, and we designed experiments around trying to refine the models, to see if we could get better accuracy. The three main areas in which we refined the models were, first, refining the prompt: looking at how we could change the prompts, whether we could simplify them in any way, and whether changing the persona text would impact the responses we were getting. The second thing we looked at was training the models: we fed the large language models answers from previous respondents and said, this is how respondents have answered previously, please take this into consideration when you are responding to the tasks you're given.
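The "training" step described here is essentially few-shot prompting. A minimal sketch under assumed details (the example tasks and choices are invented): prior respondents' answers are serialized into the prompt ahead of the new task.

    # Sketch of "training" an LLM with prior respondents' answers via few-shot
    # prompting. The example tasks and choices are invented; the point is that
    # the history is fed to the model ahead of the new task.
    from openai import OpenAI

    client = OpenAI()

    prior_answers = [  # hypothetical choices taken from real respondents
        ("Task: A=$499/64GB, B=$599/128GB, C=$699/128GB large screen", "B"),
        ("Task: A=$449/64GB, B=$649/256GB, C=$549/128GB", "C"),
    ]

    history = "\n".join(f"{task} -> chose {choice}" for task, choice in prior_answers)
    new_task = "Task: A=$479/64GB, B=$629/256GB, C=$579/128GB. Which one? Letter only."

    prompt = (
        "This is how respondents have answered previously; take it into "
        "consideration when responding.\n"
        f"{history}\n\n{new_task}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content.strip())
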
  • 00:17:36
    And thirdly, we looked at the positional bias, where we randomized and rotated the order of the concepts to see if that would make a difference in the responses we got.
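The concept-rotation remedy for positional bias is easy to express in code. In this self-contained sketch, a toy biased responder stands in for the real model call; concepts are shuffled before each task and the pick is mapped back to the original concept, which spreads any first-position bonus evenly.

    # Self-contained sketch of mitigating positional bias by randomizing the
    # order of concepts per task. A toy biased responder stands in for the
    # real model call; the shuffle-then-demap mechanics are the point.
    import random

    def biased_llm(shuffled: list[str], rng: random.Random) -> int:
        # Toy stand-in: picks the FIRST position 60% of the time, otherwise
        # uniformly, mimicking the positional bias described above.
        return 0 if rng.random() < 0.6 else rng.randrange(len(shuffled))

    def run_task(concepts: list[str], rng: random.Random) -> str:
        order = list(range(len(concepts)))
        rng.shuffle(order)                       # randomize on-screen position
        picked = biased_llm([concepts[i] for i in order], rng)
        return concepts[order[picked]]           # map the pick back to the concept

    rng = random.Random(0)
    concepts = ["Concept A", "Concept B", "Concept C"]
    counts = {c: 0 for c in concepts}
    for _ in range(9999):
        counts[run_task(concepts, rng)] += 1
    print(counts)  # roughly equal: rotation spreads the first-position bonus evenly
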
  • 00:17:49
    So where are we today with large language models answering choice tasks? Large language models have shown impressive capabilities, and there are still a lot of things we need to consider and learn in order to unlock their use cases while mitigating their risks and limitations. This is actually an AI-generated image, and it's done a really good job of generating the lead Terminator, but if you look around the image you may find some imperfections. My colleagues Chris and Cam will be discussing these imperfections, and the answers to the hypotheses we tested, at the Analytics and Insights conference next month, so please do sign up if you want to hear some of the answers and insights we want to share with you. Thank you.
  • 00:18:41
    Fantastic, Mangela. I've got a question for you: how much are clients asking for AI solutions for marketing research, or is it something you're seeing mainly being driven or recommended to clients by consulting firms and research providers?
  • 00:19:00
    We definitely do have clients that are more interested in this and want to partner with us, almost co-developing solutions. We are also developing our own solutions: we're looking at developing an AI-based chatbot, and we've got a number of initiatives that we're currently running with clients. We're also looking at new types of work, for instance vision AI, where we're using AI to process images and extract relevant information such as branding, ambiance, and things like that. So we definitely have a lot of interested parties wanting to know where this is going, and with everyone talking about generative AI and these large language models, clients are definitely coming to us to help develop partnerships through which they can better understand how to move forward within this landscape.
  • 00:20:08
    So, I haven't seen your slides yet; you looked at three different conjoint analysis studies to try to see if LLMs can do a decent job of replicating what real respondents did. Could you give us a thumbs up, a thumbs to the side, or a thumbs down on how well it's doing in its current technology state?
  • 00:20:33
    How about that? Okay, a little bit like that, at this point. I think we do need to understand that we don't know the source of the information these models are trained on, in terms of where and how they're trained; that's all very much unknown. We understand that as the models grow in size, it may not be that we need larger models, but rather more models that are tailored towards specific areas, maybe even specific categories, especially within the market research industry.
  • 00:21:17
    Right. I know your colleague: I was exchanging email with Chris Moore, and he commented that if you're talking about a widely known and widely discussed topic on the internet, such as electric vehicles in the USA, then LLMs might be able to do pretty well; but what about light bulbs in Estonia? That's the example he brought up: the model can only answer based on what it's trained on, and if we're talking about light bulbs in Estonia, how well is it going to do compared to a well-known and well-covered topic on the internet, such as electric vehicles in the United States? Yeah. Fantastic, thank you, Mangela.
  • 00:21:57
    Let's move on to Dan Penny. Dan is a research and insights director at Microsoft; his team supports monetization and business planning, which drives decisions on business models, packaging, and pricing. He's worked in various research roles since joining Microsoft in 2004, from supporting Azure and Microsoft 365 product marketing to corporate issues like public policy engagement. Before Microsoft, he worked at Research International, ultimately part of Kantar TNS, and he also has a doctorate in 17th-century French religious history. With that, Dan, please take the stage.
  • 00:22:37
    Thank you, Bryan, and it's great to be here. Yes, if only that doctorate were in generative AI and conjoint, I'd be much better off. I'm really looking forward to the Sawtooth conference; we actually have two presenters there, because at Microsoft I would say we're really experimenting with, and seeing adoption of, AI across a broad range of research use cases. Those flow across surveying, guide design, surveys themselves with chatbots, analysis of open-ended data like Kevin talked about, audio and video as well, and then cross-project synthesis and insights. Barry Jennings is going to be talking about that bigger picture at the conference, and about our overall construct for thinking about AI, which is a key initiative for our research team this year. AI-generated synthetic data, which I'll be talking about at the conference along with two of my colleagues, is one aspect of that broader AI initiative, and one of our speakers is Jim Brand, whose paper on conjoint, one of those early papers, Mangela mentioned earlier.
  • 00:23:40
    At the conference we'll be talking about our general approach to AI synthetic data and learnings from five different experiments that we've done. As others have said, we're really on a journey here, and for us, being on the client side, that has involved vendor outreach, partnering with several research firms and AI specialists, and then developing our own in-house tools. There's a lot of academic literature too, and we're trying to take that into account; for instance, the way some papers have noted that ChatGPT may mainly focus on maximizing expected payoffs, rather than doing what humans often do, which is acting risk-averse for gains and risk-seeking for losses.
  • 00:24:24
    So I would say it's very early days: we've barely left the Shire, we've barely left Tatooine, pick your metaphor. I do think, though, that it's helpful to have a framework for thinking about the different types of AI-generated data experiments, and we'll talk about that at the conference, because we really think a single experiment proves almost nothing; that's along the lines of what Chris Chapman at Google has talked about. So I think of having a framework with dimensions like the who: what is the audience, is it a mainstream audience, or is it that niche audience of light-bulb buyers in Estonia, as you were mentioning, Bryan? Then the what: is it a familiar issue or a simple question, or is it testing really new product value, or a complex packaging question that really lends itself to a conjoint? And then the how dimension: how are we going to use this, and with what precision? Is it just for general direction, is it a business-critical topic where we need a ton of precision, or is it to help pre-test stimuli? We have a lot of use cases. So I really think of it as a cube with these dimensions, and we need to test in all the different spaces in that cube to figure out what the right use cases are and how we can actually use it for the business.
  • 00:25:46
    And I would say that when assessing results, we need to look pretty closely at the kinds of measures a number of folks have talked about: is there stability; is there validity, in that it agrees with things it should agree with and disagrees with things where it should differ; and what kind of distribution do we see in the data? Actually, I think there's also the question of whether we'd really expect results similar to a human's, given the way we know humans work with conjoints; for instance, with attribute non-attendance, does GPT essentially exhibit attribute non-attendance in a similar way?
  • 00:26:18
    Having said that, I'll turn briefly to our findings, your sort of Romanesque thumbs up or thumbs down; some of the work we've done is conjoint-related, so I'll touch on that. First, on the good side, we have seen that GPT out of the box can work well on publicly known topics, your car example, Bryan: topics where we're asking things like PC form-factor preferences or expectations of device costs on the consumer side. Secondly, from some conjoint work, it can do a decent job of simulating willingness to pay and feature importances, at least in a couple of scenarios with well-known consumer products. And third, GPT can give answers that correspond more closely to reality than some surveys where humans might get confused or find it hard to render judgment. So that's the good news, the thumbs-up bit.
  • 00:27:06
    On the thumbs-down bit, I would say we see that GPT, particularly out of the box, can be just too optimistic or tech-forward; for instance, asked whether AI will actually be a positive influence on people's lives, GPT, in a rather self-interested way, says yes more so than people do. Second, it struggles with topics that are really distant from the prompt information or from what's in the public domain; for instance, with brand attributes we tend to see a lot higher agreement than with humans, and Kantar talks about something similar. And it was really unsuccessful with much more sophisticated scenarios where we're testing, say, a new product's value in the commercial space with a specific audience like security decision makers.
  • 00:27:53
    So in general we really see the most promise with particular scenarios, having the right horse for the course: what might work in a particular consumer scenario might not work in a commercial scenario, where we might need a much more trained model, leveraging internal data and survey data to pre-build that model, maybe with fine-tuning for specific tasks. For instance, in another experiment we did see that by leveraging RAG we could get utilities very similar to our human data, even in that somewhat more complex case.
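Dan's RAG (retrieval-augmented generation) point can be sketched in a few lines. This is a toy illustration, not Microsoft's implementation: the grounding snippets are invented, naive keyword overlap stands in for embedding search, and the retrieved text is simply prepended to the prompt.

    # Toy sketch of retrieval-augmented generation (RAG) for a survey-style
    # question: retrieve relevant internal snippets, prepend them to the prompt.
    # The snippets are invented; keyword overlap stands in for embedding search.
    from openai import OpenAI

    client = OpenAI()

    internal_notes = [  # assumed grounding corpus, e.g. prior survey findings
        "2023 survey: security decision makers rank compliance above price.",
        "Win/loss notes: deployment effort is the top objection in commercial deals.",
        "2022 tracker: brand familiarity is low for the newest security product.",
    ]

    def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
        words = set(question.lower().split())
        return sorted(docs, key=lambda d: len(words & set(d.lower().split())),
                      reverse=True)[:k]

    def answer_with_rag(question: str) -> str:
        context = "\n".join(retrieve(question, internal_notes))
        prompt = f"Context from prior research:\n{context}\n\nQuestion: {question}"
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

    print(answer_with_rag("What matters most to security decision makers?"))
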
  • 00:28:27
    But I would say even there we're very much on the journey, and we expect to be doing a lot more experimenting. I think we're really one of those firms in the bucket of doing a lot of outreach, to folks like Mangela, to understand what they're doing and to look for partnership. So yeah, we're looking forward to continuing that journey.
  • 00:28:51
    Thank you so much. Can you give an example of a specific instance in which AI contributed significantly to a strategy or research effort at Microsoft so far?
  • 00:29:00
    Yeah, it's an interesting question, Bryan, because in some ways it's suffused almost everywhere. Almost everything that we do, I would say, incorporates it at some level; not everything, but a large proportion. For instance, in an awful lot of the cases where we have open-ended data, we're either experimenting with a bot in the survey or we're using AI for coding. For instance, our main customer and partner satisfaction survey, 50,000 open ends fielded twice a year: we are using it for coding in that case. I think we have something like 80-odd projects in the first half where we're using AI, so it is suffused everywhere. But on the other hand, there are a lot of specific use cases where we haven't yet said, as with synthetic data from conjoints, that we're confident enough to step away and not do the human survey; we're much more in the "experiment, do parallels" stage. I think really the question is: what's the right series of tests for a particular scenario that would give us confidence to actually take that step away? You can imagine that's not going to be the most business-critical topics to begin with; it's likely to be the lower-hanging fruit. But otherwise, I would say AI, especially in coding and in qual summarization, is more everywhere than nowhere.
  • 00:30:27
    And when you say coding, you're not meaning programming languages; you're talking about coding of open-end content, right?
  • 00:30:35
    Yeah, exactly, and with video and so on. You can imagine the typical qual project now is somewhat different than it used to be, in the sense of the ability to do very quick-turn summaries from each interview, and the ability to then do thematic views. I think it's democratizing at least access to the material much more quickly, giving us the ability to report earlier on what we're seeing. But yes, that's what I meant by the coding; we're using it for that other kind of coding too, but that's sort of a separate bucket.
  • 00:31:07
    Thanks so much. Our last speaker is Dr. Jeff Dotson. He's a professor of marketing at the Marriott School of Business at BYU. Jeff received his PhD in quantitative marketing from the Fisher College of Business at Ohio State University. His research focuses on the development and application of Bayesian statistical methods for a variety of theoretical and applied marketing and management problems. Jeff has taught courses in marketing research, marketing analytics, pricing strategy, customer relationship management, survey research, advanced analytics, and generative artificial intelligence, so I suppose that's going to make you an expert on the topic. Go ahead, Jeff.
  • 00:31:49
    Well, thanks, Bryan; let me share my screen. Expert is a strong word; I don't think I fall into the expertise category. I'm excited to present this. I'm chairing a session as part of the academic track at the conference that includes three speakers, all speaking about the use of generative artificial intelligence in conjoint. We have Ayelet Israeli, whose paper on GPT for marketing research has been mentioned a couple of times; she's from Harvard Business School and will be presenting to us. Nino Hardt, who used to be at Ohio State and is now with SKIM, is presenting another paper on using large language models to explore the effects of product introduction, exploring some interesting heterogeneous distributions. And then I'm presenting something a little bit different: my paper is called "Creating Experimental Stimuli with Generative AI." It's the outcome of a project I've been working on for the past year or so, which is to say I've been doing a thing, and I like that thing a lot, but I'm not sure if it's the right thing to do. It's co-authored with Roger Bailey from Ohio State.

    The project that motivated this relates to the question of the value of artistic style. These generative AI systems, large language models and text-to-image generators, are trained on enormous data sets that were largely scraped from the internet, so they have access to information that's part of the public domain and to some information that's probably privately held, which may be a violation of intellectual property laws. Artists in particular are very concerned about the way text-to-image generation may misappropriate their style and potentially replace them in the work they do as artists. Our paper explores this issue; there are lots of lawsuits going on right now. We explore a variety of potential remediations to this problem, and one of those is the potential to pay artists a royalty for the use of their style, and maybe a fee for being included within the training data set used to create these models. To assess a royalty structure, we need to understand the incremental value of a particular artist's style as it relates to preference and willingness to pay in a commercial context, and to get there, this is a great application of conjoint analysis.
  • 00:33:53
    I could do this a couple of ways. One is to create a conjoint study using verbal descriptions of products; in this case we're measuring preference for vinyl stickers that you might stick on your bumper, your Stanley, or your car-top carrier. I could describe the subject of the sticker, so what's the topic, who is the artist, and then give a price point: in this case I might say the subject is a cat in a cup, the artist is Alphonse Mucha, and the price point is $1.49. I could collect data about this, but it creates a lot of problems, because I might know the subject, I might not know the artist, and I have no idea what the sticker actually looks like, which matters a lot. Maybe a better way to do this is to actually create those concepts themselves. In this case I've got the same information embodied within these images that I've created, here using Midjourney: I'll show you a concept of the sticker and a price point, and I'll have you pick the one you like best.
  • 00:34:47
    So what we're effectively doing here is creating our experimental design in the space of the generative prompt: I'm laying out my attributes and levels within the prompt that's being used to create these images. "Create a sticker of [subject] in the style of [artist]"; so, for example, create a sticker of a cat in a cup in the style of Alphonse Mucha.
    phenomenal at generating this this uh
  • 00:35:08
    this this this type of object this this
  • 00:35:10
    this the stimulite the challenge is that
  • 00:35:11
    if I just create one of these things um
  • 00:35:14
    um I'm I'm Bound by basically the fixed
  • 00:35:16
    effect of that object so the interaction
  • 00:35:17
    of the topic with the artist and then
  • 00:35:19
    that particular realization from from
  • 00:35:21
    the stochastic generator and so my
  • 00:35:23
    intuition around this is I have to
  • 00:35:24
    create lots of versions of these things
  • 00:35:26
    and so I can create many versions of of
  • 00:35:29
    of stickers generated from the same
  • 00:35:30
    basic prompt um and I can use that to
  • 00:35:33
    sort of back into like what like what is
  • 00:35:34
    the the value of the artistic style of
  • 00:35:35
    of Muka for example um and so we're
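    (A minimal sketch of what "building the experimental design in the
    space of the generative prompt" could look like in code. This is an
    illustration, not the paper's pipeline; the attribute levels, prompt
    template, and generate_image() helper are hypothetical placeholders.)

```python
import itertools
import random

# Hypothetical attribute grid (levels are illustrative only)
subjects = ["a cat in a cup", "a dog on a skateboard", "a mountain sunrise"]
artists = ["Alphonse Mucha", "no particular artist"]
PROMPT = "a die-cut vinyl sticker of {subject} in the style of {artist}"

REPLICATES = 20  # many draws per cell, to average over the stochastic generator

def generate_image(prompt: str, seed: int) -> str:
    """Stand-in for a real text-to-image call (e.g., a diffusion-model API)."""
    return f"sticker_{abs(hash((prompt, seed)))}.png"

design = []
for subject, artist in itertools.product(subjects, artists):
    prompt = PROMPT.format(subject=subject, artist=artist)
    for rep in range(REPLICATES):
        seed = random.randrange(2**31)  # record the seed for reproducibility
        design.append({
            "subject": subject,
            "artist": artist,
            "rep": rep,
            "seed": seed,
            "image": generate_image(prompt, seed),
        })

# Price points would typically be crossed with these images later, when
# assembling choice tasks, so each image can appear at several prices.
```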
  • 00:35:38
    So we're using generative AI to create stimuli. As I've been doing
    this, it's starting to apply in a lot of different contexts. It
    applies to scenarios where verbal descriptions of products are
    difficult or effortful for consumers to evaluate; humans, as we know,
    are better at responding to very specific things as opposed to
    abstract concepts. So we're finding applications in concept testing,
    package design, product listing photos, social media posts, and
    advertising copy, which would be a text-to-text example. Lots of
    applications, but this raises a bunch of issues related to the
    measurement of these effects.
  • 00:36:12
    We're effectively creating a new measurement scale, and we need to be
    able to evaluate the properties of that scale, specifically with
    respect to the validity and reliability of what we're measuring. How
    do we deal with the fact that generative AI is intrinsically
    stochastic, which is to say there's a one-to-many mapping (I think
    theoretically a one-to-infinite mapping) between the semantic
    information in a prompt and the images or text that are generated
    accordingly? These images and texts are generated from distributions
    of unknown form; can we learn something about those distributions? Is
    it even possible to orthogonally manipulate multiple features within
    the same prompt? Images are multi-dimensional, they're evocative,
    they're easy to respond to, but can I really manipulate artistic style
    and the base image simultaneously? If not, can I find ways to estimate
    the correlation between those things, and some way to debias by
    understanding how they're associated with each other in the embedding
    space?
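    (One hedged way to probe that last question: embed the generated
    images and check whether the "style shift" points the same way
    regardless of subject. The embed() callable below is a placeholder for
    a real image encoder such as CLIP; this is a sketch of the idea, not
    the paper's method.)

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def cell_mean(images, embed) -> np.ndarray:
    """Mean embedding over all replicates generated for one design cell."""
    return np.mean([embed(img) for img in images], axis=0)

def style_shift(cell_means, subject, artist_a, artist_b) -> np.ndarray:
    """Direction the embedding moves when only the artist level changes."""
    return cell_means[(subject, artist_a)] - cell_means[(subject, artist_b)]

def style_consistency(cell_means, subjects, artist_a, artist_b):
    """If style and subject are (approximately) orthogonal manipulations,
    the style-shift vectors should be nearly parallel across subjects.
    Low cosine similarities suggest the two features are entangled, and
    their association could be estimated and used for debiasing."""
    shifts = [style_shift(cell_means, s, artist_a, artist_b) for s in subjects]
    return [cosine(shifts[i], shifts[j])
            for i in range(len(shifts)) for j in range(i + 1, len(shifts))]
```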
  • 00:37:07
    The paper we're presenting addresses these and potentially many other
    issues. Our goal is basically to provide a protocol that researchers
    can use to implement, and also to justify, this type of measurement
    system, where we're building the experimental design in the space of
    the generative prompt and creating these really interesting, evocative
    things that consumers can respond to. Really excited to talk about it.
  • 00:37:29
    Brian: Fantastic. Jeff, I've got a follow-up question for you. I
    imagine that universities are rapidly adjusting to the influence of AI
    on education. What are some of the major initiatives for professors
    surrounding AI and teaching students about marketing strategy and
    analytics?
  • 00:37:48
    Jeff: I don't know if we have any university-wide strategies; I think
    there's a lot of experimentation being done by individual instructors.
    I'd say in the business school we're maybe a little more excited about
    this than my colleagues in, say, English, who are worried about what
    this does to their ability to teach writing, for example. As an
    illustration, I teach the core business analytics class required for
    all of our first-year MBA students, and we've been doing all of the
    coding (actual computer coding and execution) within the premium
    ChatGPT platform. I'm having students train complex machine learning
    and statistical models and then query those models to do
    decision-theoretic work after the fact. That's an experiment I've run
    for the first time this year, and to be honest it's been really
    effective; it's worked really well, although there are some challenges
    with it. It's a nice way to get non-technical students up to speed
    with respect to coding.
  • 00:38:39
    I'm also teaching a class on generative artificial intelligence for
    marketing productivity, where students are basically building a
    business using gen AI tools, including text-to-text generators,
    text-to-image generators, and a little bit of coding on top of it.
    That has also been a really interesting experiment. So, to answer your
    question: lots of people are trying lots of different things. I don't
    think we've settled yet on what the best approach is; there is a lot
    of opportunity, and probably a lot of drawbacks too, but I think most
    of us are pretty excited about it.
  • 00:39:07
    Brian: Super, thanks so much. Why don't all of you panelists go ahead
    and turn on your videos. We've been going pretty well here; I think we
    have about five minutes for you to make a follow-up comment or to
    address a question to another panelist, and then we'll turn it over to
    Justin, who will pull some questions from the audience for the Q&A.
  • 00:39:35
    Dan: Well, I'm happy to start, Brian, with a question, maybe to
    Mangela and Kevin in particular. One of the things we're interested
    in, in the context of conjoint (we do a lot of conjoint; I'd say we're
    semi-obsessed with it in our work at Microsoft), is whether you're
    doing much work where it's not about synthetic design but about how AI
    might be leveraged in the conjoint itself. For instance: asking for
    more open-ended data through a chatbot, or slowing somebody down as
    part of the conjoint itself so that AI can do a better job of
    improving the data quality from the conjoint; potentially folding the
    open-ended responses into the actual model as part of those
    parameters; or cases where AI might be part of the actual model
    building or the design generation in the first place. I was curious
    whether you've done much work around that area as part of the conjoint
    work.
  • 00:40:33
    (You're on mute.)
  • 00:40:36
    Oh, thank you. In terms of feeding the open ends directly into the
    conjoint model, I'd say no, not quite yet. There has been a fair
    amount of work on AI-interactive "why" questions and dynamic probing
    using AI, and the impact that has on the quality of the data: not just
    of the response itself, which is significant, but also on the rest of
    the survey, where people realize, "Wow, you're actually paying
    attention to me, asking me why," and they then start paying more
    attention as they go through. There have been significant improvements
    in the overall quality of the rest of the survey. I don't think I've
    seen that specifically in the context of conjoint yet, but I have to
    imagine it would be one of the areas with the biggest improvement,
    precisely because conjoint is an iterative, sometimes repetitive task
    as you go through choice sets.
  • 00:41:24
    Dan: Yeah, we had done one experiment, and we certainly have been
    seeing that: at least in that one experiment, it did improve the
    overall data quality, I think because of that exact issue. It slowed
    people down, and they feel like somebody's actually paying attention
    to them in a dynamic way, rather as they would in a more qualitative
    interview.
  • 00:41:51
    So, thanks again. (My pleasure.)
  • 00:41:57
    Brian: We have time for yet another question among the panelists, or a
    comment.
  • 00:42:01
    I'll throw one out there: has anyone been experimenting with simulated
    respondents on emotional responses, and on the comparison between,
    say, functional and emotional traits? That is, whether simulated
    respondents behave differently than real people when they're
    evaluating things in a more emotional context specifically.
  • 00:42:30
    Dan: I can maybe touch on that. I know there's been quite a lot of
    literature around that; I've seen some of the work out there, and I
    think Ipsos has done some work on that too. The thing that we have
    seen, at least (and I'm not sure if this is quite in that emotional
    bucket), is that the emotional reaction to brands is a little
    different with GPT versus humans. In general we see a much higher
    level of emotional brand association with GPT as opposed to humans, so
    if all our brand surveys were run via GPT, our brand scores would be
    through the roof, but so would everybody else's, and it would be a
    great way of really meeting your brand targets for the year. So yes,
    how you fine-tune the emotional response there is something we need to
    do more work on.
  • 00:43:30
    Brian: Well, super. Maybe at this point we let Justin dig into the bag
    of questions and start pulling out a few to submit to the panelists
    for their thoughts.
  • 00:43:45
    Justin: Yeah. Before I do, just a quick plug to check out our
    software: we are working hard to make an amazing system, and we'd like
    you to take a look at it at discover.sawtoothsoftware.com. Also, the
    Analytics and Insights Summit is April 29th through May 3rd in San
    Antonio; if you can't make it down there, check out the virtual access
    option, which you can see at sawtoothsoftware.com/conference.
  • 00:44:12
    Okay, well, we have a bunch of questions. Let's see here. Mangela,
    there's a question for you: in their paper, did they only focus on
    temperature as a prompt parameter, or did they also explore other
    prompt components and parameters such as top_p, best_of, frequency
    penalty, or presence penalty? If not, what was the reason to focus on
    temperature only as a parameter?
  • 00:44:44
    Mangela: So, I think it's that the temperature setting is quite well
    known. In the original paper that we looked at, they only looked at
    one temperature setting (I think they used a temperature setting of
    one), so what we wanted to do was really understand, if you varied
    that temperature setting, what impact that would have on the results.
    So we ran over 50 experiments, just looking at those different
    temperature settings.
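    (For readers unfamiliar with these knobs: a minimal sketch of a
    temperature sweep of the kind described, assuming the OpenAI chat API.
    The model, task, and replicate count are illustrative, not the study's
    actual setup; the same loop could vary top_p or the penalty
    parameters instead.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CHOICE_TASK = (
    "You are a survey respondent. Choose the product you prefer.\n"
    "A) Brand X, 16 oz, $2.99\nB) Brand Y, 12 oz, $2.49\nC) None of these\n"
    "Answer with A, B, or C only."
)

# Re-run the same choice task across temperature settings and replicates
# to see how much the answer distribution moves.
results = {}
for temperature in [0.0, 0.5, 1.0, 1.5]:
    answers = []
    for _ in range(50):  # replicates per setting; 50 is illustrative
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": CHOICE_TASK}],
            temperature=temperature,
            # top_p=..., frequency_penalty=..., presence_penalty=...  # other knobs
        )
        answers.append(resp.choices[0].message.content.strip())
    results[temperature] = answers
```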
  • 00:45:20
    Justin: Okay, thank you. Jeff, one for you: how competent will
    students who use generative AI to code be at debugging or recognizing
    errors? Are we engineering a black-box culture?
  • 00:45:31
    Jeff: That's a great question. These are MBA students, and so they
    tend not to be super competent with respect to coding to begin with.
    The way I view it: in ChatGPT with the Code Interpreter, what it will
    do is write code in Python, execute it in the browser, and then allow
    them to query those results using natural language. So I would view
    this as maybe a way to get people into coding; not a replacement for
    coding, but a way to get them into it. It feels a lot like learning to
    code by recording macros in VBA and then doing the editing after the
    fact within Excel. Anything I can do to get my students interested in
    analytics and coding and analysis, I think, is useful. But again,
    there are lots of intended consequences and lots of unintended
    consequences; I think at this point we don't know.
  • 00:46:15
    Justin: Great, thank you. Another question, for the group: do we know
    if, or how, LLMs prioritize newer information?
  • 00:46:32
    Maybe... what do you mean by newer information?
  • 00:46:37
    Justin: I would think it might be asking: was the information that was
    scraped written this year or a previous year, and is our LLM going to
    weight the more recent information that it scrapes higher than older
    information?
  • 00:46:59
    I don't believe that's the way they function. The construction of the
    LLM itself, I don't think, prioritizes the recency of the data. A lot
    of these systems now interact with the internet, so when they produce
    responses they'll do an internet search and provide things that may be
    a little more contemporaneous, but that's not part of the construction
    of the large language model itself.
  • 00:47:21
    One caveat, though: when LLMs are referencing the training data, they
    do recognize that some of the training data references previous data.
    Suppose there's a medical article that says "this is old data" and
    refers to a prior article; it's establishing that the newer article is
    more recent and more relevant. So in that sense, even though the model
    may not be specifically trained to prioritize newer information, the
    content itself may have the effect of prioritizing newer information.
  • 00:47:54
    Justin: Great, thank you. A question for Dan: what are some of the
    tools you have used to help you distill qualitative findings? Are
    there limitations to what kind of qual data you are willing to feed
    into the AI, for example pre-release marketing content?
  • 00:48:12
    Dan: Yeah, there are some particular tools. We actually have a
    partnership with Voxpopme, for instance, so we're basically leveraging
    Voxpopme as a platform, and Ro Romani from RT is actually speaking at
    a number of conferences with Voxpopme, I think. So I'd say there is a
    particular platform that we're using a lot for video, audio, and so
    on.
  • 00:48:40
    In terms of material, we're obviously also very, very careful about
    what material we use and where we use it. For instance, we do have
    guidance (this is for anything that we do) about which LLM it's used
    on. We don't want to put anything on an LLM that's public; we'll want
    to run it on Azure in a confidential setting, so that nothing goes to
    another external, public LLM. We're actually very careful about that:
    we have a lot of guidance that we've given to people about what you
    can and can't test, and how you can and can't test it. What we don't
    want is a whole bunch of material, including from respondents, going
    out into the public domain, so everything is in contained
    environments.
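    (A minimal sketch of what "run it on Azure in a confidential setting"
    can look like in code, assuming the openai Python SDK's Azure client.
    The endpoint, deployment name, and API version are placeholders; this
    is not Microsoft's actual configuration.)

```python
import os
from openai import AzureOpenAI

# Data sent here stays within the organization's own Azure resource,
# rather than going to a public endpoint.
client = AzureOpenAI(
    azure_endpoint="https://my-private-resource.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # illustrative API version
)

summary = client.chat.completions.create(
    model="my-gpt4-deployment",  # the *deployment* name, not a public model name
    messages=[
        {"role": "system",
         "content": "Summarize the key themes in these interview notes."},
        {"role": "user",
         "content": "<transcript text that must not leave the tenant>"},
    ],
).choices[0].message.content
```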
  • 00:49:35
    Justin: Great, thanks. Another question here, about AI hallucinations.
    It was mentioned that qualitative analysis has been more efficient now
    that we have gen AI. Have there been any issues with AI hallucinating
    when creating summaries or identifying themes, to your knowledge?
  • 00:49:58
    Yes, there are a lot of stories about that. Qualitative researchers
    who've seen it basically summarize it this way: the common refrain is
    that the LLM models are trained to be very helpful, and they're so
    helpful, so eager to make you happy, that when there's nothing in the
    data to justify a particular finding ("find a quote that justifies
    this finding"), they will happily make up that quote. Now, there are
    ways to address that, and a lot of it is prompt engineering: if you
    engineer your prompt to specifically tell it not to do that, you can
    protect yourself to some degree from certain types of hallucination.
    But it's certainly something you have to be cautious about.
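    (A minimal sketch of that kind of prompt-engineering guardrail,
    assuming the OpenAI chat API: the prompt requires verbatim quotes and
    explicitly licenses the model to report that no quote exists. The
    wording and model are illustrative.)

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUARDED_SYSTEM_PROMPT = (
    "You summarize qualitative research transcripts.\n"
    "Rules:\n"
    "1. Only use quotes that appear VERBATIM in the transcript provided.\n"
    "2. If no quote supports a theme, write 'No supporting quote found'; "
    "never invent or paraphrase a quote.\n"
    "3. It is acceptable to report that a theme has weak or no evidence."
)

def summarize(transcript: str, themes: list[str]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=0,        # low temperature for extraction-style tasks
        messages=[
            {"role": "system", "content": GUARDED_SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Transcript:\n{transcript}\n\n"
                        f"For each theme in {themes}, give one verbatim "
                        "supporting quote, or say none exists."},
        ],
    )
    return resp.choices[0].message.content
```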
  • 00:50:36
    Dan: Yeah, and just to add to that on the "helpful" side: an example
    is where, in part of the prompt, you might say "choose one to three
    things." It will tend to pick three things, because it will try to be
    as helpful as possible, and a human won't do that, so you get a
    different distribution of counts in cases like that. So there are
    those kinds of differences around helpfulness. I'm not sure about
    hallucination (I'm sure we have examples of it), but that helpfulness
    aspect is definitely there.
  • 00:51:08
    Justin: Okay. With the development of gen AI tools, do you think there
    will potentially be a data collection tool that can replace surveys?
  • 00:51:24
    I mean... maybe...
  • 00:51:28
    If you want, I'll just answer: partly yes, but I think the question is
    going to be a little different. We can gather brand equity, and we can
    run a lot of the surveys we run, through open-end analysis or just
    conversational tools, and generate incredible data and quantify it at
    scales of n = 1,000, n = 2,000. The question is what surveys will look
    like. One of the big questions we're going to be asking ourselves is
    not "can I gather this data through an open-ended method?", because
    for 50 years it's been "I can't do this; it's too costly to code open
    ends," etc., but "should I?" I think the comparison will be much
    closer to something like gathering brand awareness with aided versus
    unaided awareness: unaided awareness is what's top of mind, what
    people think about; aided awareness is "here are 20 different claims I
    could make; which of these claims is most compelling?", which is a
    closed-ended answer. So the question is what we're trying to do with
    it. It's no longer what we're capable of doing, or what is
    cost-effective (because the cost is going to be reduced across the
    board), but what we want to do: what type of information, how we want
    to engage people, what we want to elicit from their minds. That's a
    very high-level thought.
  • 00:52:46
    Thank you.

    Dan: I do think as well that we're so far away from that. I understand
    the question, but it's such a theoretical question that it's not
    really top of mind for us, at least right now. For us it's more about
    what tools can make us more effective in terms of speed and agility
    and so on, and then this exploration around synthetic data. Coming
    back to Brian's question earlier: it's not like we're remotely in a
    position where we would have a single synthetic use case where, right
    now, we would say we'll do this as opposed to the human; it would be
    so limited. So we're quite a long way, I think, from that.
  • 00:53:34
    Justin: Great. Another question (there are a lot of them here, by the
    way, and we're probably not going to be able to answer all of them,
    except that maybe our panelists will be willing to type in some
    answers if you interact with the Q&A section towards the end). Anyway,
    here's another one: do any of y'all have recommendations for a solid
    primer, one or two hours' worth of learning, on prompt engineering for
    beginners? There are tons of resources out there, but it's a bit
    overwhelming, to put it mildly.
  • 00:54:07
    Jeff: I can make a recommendation. Ethan Mollick at Wharton has done a
    lot of work on providing practical tips on how to use these systems,
    so I would take a look at his Twitter feed, which is really useful. He
    published a book just recently that might be useful, and he was
    recently on a podcast with Ezra Klein where they talk about ways to
    use GPT, and large language models in general. I think his stuff is
    pretty fantastic.
  • 00:54:30
    Justin: Can you say the name again, slowly?

    Jeff: Ethan Mollick: E-t-h-a-n, and I think it's M-o-l-l-i-c-k,
    maybe. He's on the faculty at the Wharton School at the University of
    Pennsylvania.
  • 00:54:46
    Justin: Another kind of related question, about coming up to speed for
    researchers new to this topic: are there any review papers, or a
    knowledge-sharing resource, that introduce or summarize the different
    research efforts used in developing simulated respondents?
  • 00:55:07
    Dan: I think there are a lot of papers out there. One that's quite
    good (I just have it on another screen, actually; maybe I'll put it
    into the chat) was about using large language models to generate
    silicon samples: challenges, opportunities, and guidelines. I thought
    that was a pretty helpful review of the academic literature. There are
    quite a few of those academic papers; I think the challenge for
    somebody like me on the client side, without that historic muscle
    around this stuff, is just understanding it and wrapping your head
    around it. But that one, I thought, was actually quite a good overview
    paper.
  • 00:55:42
    Justin: Great. Another question: can anyone riff on the topic of
    ethics in AI?
  • 00:55:51
    Kevin: There's a massive conversation about it at every single level:
    the Insights Association, within companies (I'm sure Microsoft has
    one). It's just huge; it's too big of a topic to do in one minute.
  • 00:56:04
    Jeff: Agreed. The paper I alluded to in my presentation is motivated
    by one of these ethical issues: where does this training data come
    from, who owns it, and what is owed as a result of owning it? There
    are tons of lawsuits going on right now to try to settle these issues,
    and those are just the legal questions; the ethical questions are even
    more nuanced to deal with. So lots of questions, lots of great
    conversation, but too much for a minute. I agree with Kevin.
  • 00:56:33
    Justin: Jeff, a quick question for you: how do you handle disclosure
    of AI-generated images to the respondent? Do you tell them prior to
    the task, and do you think there would be a difference in responses to
    the choice tasks between people who know it is AI-generated and those
    who don't?
  • 00:56:52
    Jeff: Yes. I mean, there are people who like AI-generated stuff, some
    who dislike it, and a lot of people who don't care. In that paper we
    do three different studies. In the first study, I don't believe we
    tell them that the photos are AI-generated; in the other two studies
    we do tell them. And in terms of the dominant effect that we care
    about, which is the willingness to pay for artistic style, that's
    preserved across those different permutations of the study. But again,
    it depends on what we're trying to capture. In the last study we want
    to find out whether people are willing to pay for artistic
    compensation: would they pay a royalty to compensate an artist if the
    image were generated using their style within a text-to-image AI? The
    answer to that question is pretty strongly yes, and then we do some
    exploration around how much they would be willing to pay, which is
    also kind of interesting.
  • 00:57:39
    Justin: Okay, I'll do one last question, and then, if you can stay
    around a little bit longer to type in some answers to questions, that
    would be helpful (we have a lot more in here), but we understand
    everybody has things coming up. The last question is from our friend
    Leonard Kale: how do you get segments interpreted with AI?
  • 00:58:02
    Jeff: We did a project in my class a couple of weeks ago on
    segmentation where I had the students, I think, just apply k-means
    within ChatGPT: it writes Python code, executes it in the browser, and
    generates the segments. Then we just asked ChatGPT to characterize
    each of these segments, and it goes into the verbatim responses, into
    the actual data, and does a fairly decent job, at a high level, of
    describing who these individuals are and how they differ from one
    another. So I guess my answer to Leonard's question is: just ask it,
    do some experimentation around it, and see how it performs.
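    (A minimal sketch of that workflow outside ChatGPT, assuming the
    scikit-learn and openai packages; the file name, number of clusters,
    and model are placeholders.)

```python
import pandas as pd
from sklearn.cluster import KMeans
from openai import OpenAI

df = pd.read_csv("survey.csv")            # placeholder data file
features = df.select_dtypes("number")     # numeric attitude/usage items

# Standard k-means segmentation (k=4 is illustrative; choose k properly in practice)
km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(features)
df["segment"] = km.labels_

# Per-segment means act as a compact profile the LLM can read
profile = df.groupby("segment")[features.columns].mean().round(2)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": "Here are mean scores by segment from a survey:\n"
                   f"{profile.to_string()}\n\n"
                   "Characterize each segment in two sentences and suggest "
                   "a short descriptive name for each.",
    }],
)
print(resp.choices[0].message.content)
```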
  • 00:58:41
    Go ahead.

    There have also been applications where people have asked it simply to
    build segments from qualitative data: "here's a bunch of qualitative
    data; give me three or four segments." There's more randomness in
    that: if you were to send that request a second time, you could get
    different segments. So just be aware that there's some arbitrariness
    to it; it's not magical.
  • 00:59:05
    Justin: Okay. With that, again, if you panelists can hang around to
    type in some answers, that would be helpful. Brian, anything you want
    to say to wrap this up?
  • 00:59:11
    Brian: Oh, just a huge round of applause for these four individuals
    who spent the time; this has been fascinating. We very much look
    forward to seeing you at the A&I Summit. The main core of the
    conference is going to be May 1st through 3rd, and the previous two
    days, April 29th and 30th, are going to be some optional workshops and
    tutorials that you can come to if you'd like. But thank you, thank you
    so much; this has been super interesting.
Tags
  • AI
  • marketing research
  • data quality
  • qualitative analysis
  • language models
  • summit
  • ethics
  • prompt engineering
  • data collection systems
  • innovation