Inquiry Regarding Fine-Tuning Pretrained SOTA Models in the Competition #227

@lavenderwfy

Description


Hello,
I am currently participating in the competition and exploring ways to improve our model's performance.
Given that the current SOTA models on the leaderboard have demonstrated strong results, I would like to ask whether fine-tuning a publicly released pretrained SOTA model, combined with our own innovations, would be considered a valid and compliant approach under the competition's rules.

Specifically, our approach involves:

  • Using the publicly available pretrained weights of existing SOTA methods.

  • Introducing novel ideas and modifications that improve upon the existing architecture or training strategy.

  • Fine-tuning the pretrained model with different training objectives to optimize performance.
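
For concreteness, the approach we have in mind looks roughly like the sketch below (PyTorch): freeze the pretrained backbone, attach a new task head, and train with a modified objective. All names here are hypothetical placeholders — the toy `backbone` stands in for an actual pretrained SOTA checkpoint, and the confidence-penalty term in `custom_loss` is just an illustrative example of a "different training objective":

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained SOTA model (in practice, loaded from a public checkpoint)
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False  # freeze the pretrained weights

head = nn.Linear(32, 4)  # new task head, trained from scratch
model = nn.Sequential(backbone, head)

def custom_loss(logits, targets):
    # Illustrative modified objective: cross-entropy plus a small
    # confidence penalty discouraging overconfident predictions.
    ce = nn.functional.cross_entropy(logits, targets)
    penalty = logits.softmax(dim=-1).max(dim=-1).values.mean()
    return ce + 0.1 * penalty

# Only the new head's parameters are optimized
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
for _ in range(5):
    opt.zero_grad()
    loss = custom_loss(model(x), y)
    loss.backward()
    opt.step()
```

Whether we freeze the backbone fully or also fine-tune it end-to-end would depend on what the rules permit, which is exactly what we are asking about.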

Could you kindly confirm whether this methodology complies with the competition's regulations? Additionally, if there are any restrictions or required clarifications regarding the use of pretrained models, I would greatly appreciate your guidance on how to proceed.
