Page Not Found
Page not found. Your pixels are in another canvas.
A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
About
This is a page that is not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
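For reference, a minimal sketch of the relevant setting, assuming the standard Jekyll _config.yml layout used by this theme:

```yaml
# _config.yml — Jekyll site configuration (key shown in isolation)
future: false   # when false, posts dated in the future are not published
```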
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in arXiv, 2021
Solving symbolic mathematics has long been an arena of human ingenuity, requiring compositional reasoning and recurrence. However, recent studies have shown that large-scale language models such as transformers are surprisingly universal and can be trained as sequence-to-sequence models to solve complex mathematical equations. These large transformer models need enormous amounts of training data to generalize to unseen symbolic mathematics problems. In this paper, we present a sample-efficient way of solving symbolic tasks by first pretraining a transformer model on language translation and then fine-tuning it on the downstream task of symbolic mathematics. With our pretrained model, we achieve accuracy on the integration task comparable to the state of the art in deep learning for symbolic mathematics while using about 1.5 orders of magnitude fewer training samples. The test accuracy on differential equation tasks is considerably lower than on integration, as these tasks require higher-order recursion that is not present in language translation. We pretrain our model with different pairs of translation languages, and our results show a language bias in solving symbolic mathematics tasks. Finally, we study the robustness of the fine-tuned model on symbolic math tasks against distribution shift; our approach generalizes better under distribution shift for function integration.
Recommended citation: Noorbakhsh, Kimia, Modar Sulaiman, Mahdi Sharifi, Kallol Roy, and Pooyan Jamshidi. "Pretrained Language Models are Symbolic Mathematics Solvers too!." arXiv preprint arXiv:2110.03501 (2021).
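To make the recipe concrete, here is a minimal sketch of the fine-tuning step, assuming a Hugging Face translation checkpoint and a prefix-notation serialization of expressions (both are illustrative choices, not the exact setup from the paper):

```python
# Minimal fine-tuning sketch: reuse a translation-pretrained seq2seq
# transformer for symbolic integration as a text-to-text task.
# The checkpoint name and the prefix-notation serialization below are
# illustrative assumptions, not the authors' exact setup.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"   # stand-in translation checkpoint
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Toy (input, target) pair in prefix notation: the target integrates the input.
src = "mul 2 x"   # 2*x
tgt = "pow x 2"   # x^2

batch = tokenizer([src], text_target=[tgt], return_tensors="pt")
loss = model(**batch).loss   # seq2seq cross-entropy against the target
loss.backward()              # one fine-tuning step (optimizer omitted)
print(float(loss))
```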
Published in PLOS ONE, 2023
Although 3D point cloud classification has recently been widely deployed in different application scenarios, it is still very vulnerable to adversarial attacks. This increases the importance of robust training of 3D models in the face of adversarial attacks. Based on our analysis of the performance of existing adversarial attacks, more adversarial perturbations are found in the mid- and high-frequency components of the input data. Therefore, by suppressing the high-frequency content in the training phase, the model's robustness against adversarial examples is improved. Experiments showed that the proposed defense method decreases the success rate of six attacks on PointNet, PointNet++, and DGCNN models. In particular, it achieves an average increase in classification accuracy of 3.8% on the drop100 attack and 4.26% on the drop200 attack compared to state-of-the-art methods. The method also improves the models' accuracy on the original dataset compared to other available methods.
Recommended citation: Naderi, H., Noorbakhsh, K., Etemadi, A., & Kasaei, S. (2023). LPF-Defense: 3D adversarial defense based on frequency analysis. PLOS ONE, 18(2), 1–19. doi:10.1371/journal.pone.0271388
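As a toy illustration of the core idea (attenuating high-frequency geometry before training), the sketch below smooths a point cloud with k-nearest-neighbor averaging; the paper itself filters in the spherical-harmonics domain, so the function and its parameters are simplified stand-ins:

```python
# Toy stand-in for the low-pass idea: attenuate high-frequency geometry by
# averaging each point toward its k nearest neighbors. The paper filters in
# the spherical-harmonics domain; names and parameters here are assumptions.
import numpy as np

def low_pass_point_cloud(points, k=8, lam=0.5, iters=3):
    out = points.copy()
    for _ in range(iters):
        d2 = ((out[:, None, :] - out[None, :, :]) ** 2).sum(-1)  # (N, N)
        np.fill_diagonal(d2, np.inf)
        nn = np.argsort(d2, axis=1)[:, :k]       # k nearest neighbors
        out = (1 - lam) * out + lam * out[nn].mean(axis=1)
    return out

cloud = np.random.randn(1024, 3).astype(np.float32)
print(low_pass_point_cloud(cloud).shape)  # (1024, 3)
```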
Published in Advances in Neural Information Processing Systems (NeurIPS), 2022
Machine learning models based on temporal point processes are the state of the art in a wide variety of applications involving discrete events in continuous time. However, these models lack the ability to answer counterfactual questions, which are increasingly relevant as these models are being used to inform targeted interventions. In this work, our goal is to fill this gap. To this end, we first develop a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model. This model satisfies a desirable counterfactual monotonicity condition, which is sufficient to identify counterfactual dynamics in the process of thinning. Then, given an observed realization of a temporal point process with a given intensity function, we develop a sampling algorithm that uses the above causal model of thinning and the superposition theorem to simulate counterfactual realizations of the temporal point process under a given alternative intensity function. Simulation experiments using synthetic and real epidemiological data show that the counterfactual realizations provided by our algorithm may give valuable insights to enhance targeted interventions.
Recommended citation: Noorbakhsh, Kimia, and Manuel Gomez Rodriguez. "Counterfactual Temporal Point Processes." In Advances in Neural Information Processing Systems (2022).
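A minimal sketch of the sampling idea for an inhomogeneous Poisson process, assuming a uniform-threshold noise coupling (which agrees with the Gumbel-Max SCM for binary thinning decisions); all names and intensities are illustrative:

```python
# Sketch of counterfactual thinning for an inhomogeneous Poisson process.
# Observed events were accepted with probability lam(t)/lam_max, so the
# thinning noise is resampled from its posterior and replayed under the
# alternative intensity lam_cf. lam_max must dominate both intensities.
import numpy as np

rng = np.random.default_rng(0)

def counterfactual_poisson(observed, lam, lam_cf, lam_max, T):
    events = []
    # 1) Observed events: noise posterior is Uniform(0, lam(t)/lam_max).
    for t in observed:
        u = rng.uniform(0.0, lam(t) / lam_max)
        if u <= lam_cf(t) / lam_max:
            events.append(t)
    # 2) Unobserved (rejected) candidates: by superposition they form a
    #    Poisson process with rate lam_max - lam(t); their noise posterior
    #    is Uniform(lam(t)/lam_max, 1).
    n = rng.poisson(lam_max * T)
    for t in rng.uniform(0.0, T, n):
        if rng.uniform() <= 1.0 - lam(t) / lam_max:   # candidate was rejected
            u = rng.uniform(lam(t) / lam_max, 1.0)
            if u <= lam_cf(t) / lam_max:
                events.append(t)
    return np.sort(np.array(events))

lam = lambda t: 1.0 + 0.5 * np.sin(t)      # factual intensity
lam_cf = lambda t: 0.5 * lam(t)            # counterfactual intensity
obs = np.sort(rng.uniform(0.0, 10.0, 12))  # stand-in observed realization
print(counterfactual_poisson(obs, lam, lam_cf, lam_max=2.0, T=10.0))
```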
Published in arXiv, 2023
Recent vision architectures and self-supervised training methods enable vision models that are extremely accurate and general, but come with massive parameter and computational costs. In practical settings, such as camera traps, users have limited resources, and may fine-tune a pretrained model on (often limited) data from a small set of specific categories of interest. These users may wish to make use of modern, highly-accurate models, but are often computationally constrained. To address this, we ask: can we quickly compress large generalist models into accurate and efficient specialists? For this, we propose a simple and versatile technique called Few-Shot Task-Aware Compression (TACO). Given a large vision model that is pretrained to be accurate on a broad task, such as classification over ImageNet-22K, TACO produces a smaller model that is accurate on specialized tasks, such as classification across vehicle types or animal species. Crucially, TACO works in few-shot fashion, i.e., only a few task-specific samples are used, and the procedure has low computational overheads. We validate TACO on highly-accurate ResNet, ViT/DeiT, and ConvNeXt models, originally trained on ImageNet, LAION, or iNaturalist, which we specialize and compress to a diverse set of “downstream” subtasks. TACO can reduce the number of non-zero parameters in existing models by up to 20× relative to the original models, leading to inference speedups of up to 3×, while remaining accuracy-competitive with the uncompressed models on the specialized tasks.
[Paper]
Recommended citation: D Kuznedelev*, S Tabesh*, K Noorbakhsh*, E Frantar*, S Beery, E Kurtic, D Alistarh. "Vision Models Can Be Efficiently Specialized via Few-Shot Task-Aware Compression." arXiv preprint arXiv:2303.14409 (2023).
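As a simplified sketch of the compress-then-calibrate recipe: prune a pretrained backbone to high sparsity, then fine-tune briefly on a handful of task samples. TACO's actual pruning criterion and schedule differ; global magnitude pruning and the random tensors below are stand-ins:

```python
# Simplified sketch of compress-then-calibrate: prune a pretrained backbone
# to high sparsity, then fine-tune on a handful of task samples. TACO's
# actual pruning criterion and schedule differ; global magnitude pruning
# and the random tensors below are stand-ins.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)  # generalist backbone

# Globally zero out 95% of conv/linear weights by magnitude.
params = [(m, "weight") for m in model.modules()
          if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.95)

# Few-shot calibration on task-specific samples (placeholders here).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 1000, (8,))
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print(f"loss after calibration: {loss.item():.3f}")
```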
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
CE-40221, Sharif University, CE Department (Fall 2021, Spring 2022)
Designed and graded course assignments on business letter writing, resume writing, and scientific paper writing. Designed and graded midterm and final exam questions.
Math-22162, Sharif University, Math Department (Spring 2021, Head TA)
Designed and graded all of the course's assignments and held TA sessions for students. [Lecture Videos]
CE-40254, Sharif University, CE Department (Fall 2020)
Designed the course's weekly handouts, verified the validity of course assignments, and graded the course's final exam. [Course website]
CE-40354, Sharif University, CE Department (Fall 2021, Head TA; Spring 2021, Head of Assignments)
Led and managed a group of graduate and undergraduate TAs to design and grade course assignments, design and grade course exams, and hold TA sessions during the semester. [Fall Course website]
CE-40115, Sharif University, CE Department (Spring 2022, Head TA; Spring 2021, Head TA; Spring 2020)
Led and managed a group of graduate and undergraduate TAs to design and grade course assignments, design and grade course exams, and hold TA sessions during the semester. [2022 Course website]
CE-40417, Sharif University, CE Department (Spring 2022, Fall 2021)
Designed and graded machine learning and deep learning assignments (theoretical and practical). Also designed and graded the course's final exam questions on ML and DL. [Course website]